Hey guys,

The following code I have written keeps giving me an error of "Design contains an unbreakable combinational cycle"; I have tried using delays but it doesn't help.

code:

for (count2 = 0; count2 < count1; count2++) {
    if (count2 > 7 && count2 < count1 - 2) {
        if (setuppktDplus4[count2] == 1 && setuppktDplus4[count2-1] == 0) {
            setuppktDplus4[count2] = 0;
            setuppktDminus4[count2] = 1;
        } else if (setuppktDplus4[count2] == 1 && setuppktDplus4[count2-1] == 1) {
            setuppktDplus4[count2] = 1;
            setuppktDminus4[count2] = 0;
        } else if (setuppktDplus4[count2] == 0 && setuppktDplus4[count2-1] == 0) {
            setuppktDplus4[count2] = 1;
            setuppktDminus4[count2] = 0;
        } else if (setuppktDplus4[count2] == 0 && setuppktDplus4[count2-1] == 1) {
            setuppktDplus4[count2] = 0;
            setuppktDminus4[count2] = 1;
        }
    }
}

I noticed that the error only pops up when I declare the arrays as ram:

ram unsigned int 1 setuppktDplus4[140];
ram unsigned int 1 setuppktDminus4[140];

It works fine if I declare them as just normal arrays. But when I declare them as arrays, I get a mapping error while converting the EDIF to a bit file:

Error: The design is too large for the given device and package. Please check the Design Summary section to see which resource requirement for your design exceeds the resources available in the device. Number of 4 input LUTs: 4,945 out of 4,704 105% (OVERMAPPED)

Will declaring them as ram use fewer resources than declaring them as arrays? If so, how can I get around the combinational loop problem when declaring them as ram? Can someone please enlighten me... Thanks a lot in advance.

Article: 73926
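For what it's worth, the usual way around both problems (the combinational cycle with ram, and the LUT blow-up with a plain array) is to stop doing the read-modify-write in one combinational expression and instead sequence it over clock cycles, keeping the previously written D+ bit in a register rather than re-reading the RAM location that was just modified. A plain array turns into registers plus addressing multiplexers, which is why it overmaps; a ram maps to on-chip memory and is much cheaper in LUTs. The sketch below shows the idea in Verilog rather than Handel-C; the module, port, and state names are invented for the example, and only the truth table comes from the post. If I remember the Handel-C rules correctly, the equivalent fix there is to copy the ram element into a local variable in one statement and write the result back in a later statement, so each ram is accessed at most once per clock cycle.

// A sketch only, not a drop-in replacement for the Handel-C above.
module nrzi_rewrite #(
  parameter N = 140
)(
  input  wire       clk,
  input  wire       start,    // pulse to start one pass over the buffer
  input  wire [7:0] count1,   // number of valid entries, as in the post
  output reg        done
);
  reg dplus  [0:N-1];         // 1-bit-wide memories, as in the post
  reg dminus [0:N-1];

  localparam IDLE  = 2'd0,
             READ  = 2'd1,
             WRITE = 2'd2;

  reg [1:0] state = IDLE;
  reg [7:0] idx;
  reg       cur_dp;           // D+ bit just read from the RAM
  reg       prev_dp;          // D+ bit written in the previous step

  always @(posedge clk) begin
    done <= 1'b0;
    case (state)
      IDLE:
        if (start) begin
          idx   <= 8'd7;      // index 7 is read but never rewritten
          state <= READ;
        end

      READ: begin             // exactly one RAM read in this cycle
        cur_dp <= dplus[idx];
        state  <= WRITE;
      end

      WRITE: begin            // exactly one RAM write in this cycle
        if (idx == 8'd7)
          prev_dp <= cur_dp;  // seed the "previous D+" register
        else begin
          // Same truth table as the four if/else branches in the post:
          // new D+ = ~(D+ ^ previous D+), D- is its complement.
          dplus[idx]  <= ~(cur_dp ^ prev_dp);
          dminus[idx] <=  (cur_dp ^ prev_dp);
          prev_dp     <= ~(cur_dp ^ prev_dp);
        end
        if (idx + 8'd1 < count1 - 8'd2) begin
          idx   <= idx + 8'd1;
          state <= READ;
        end else begin
          done  <= 1'b1;
          state <= IDLE;
        end
      end

      default: state <= IDLE;
    endcase
  end
endmodule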
Hi all,

I am working with the Altera Stratix development board from Microtronix (http://www.microtronix.com/product_stratix.htm) and am trying to get the USB connection to the PC working. Does anyone have experience with this and maybe a working Windows driver? Or even a tip on how to get started?

Thanks, Roman

Article: 73927
On Thu, 30 Sep 2004 16:45:48 -0600, hamilton <hamilton@deminsional.com> wrote:
> Yes, having experience with any language is going to create a
> better environment for development. However, a beginner using
> forth without a safty net ( i.e. mentor) is just crazy.

Do you mean a beginner programmer or a beginner in Forth? My first paid Forth job was a colour graphics picture editor completed in about five working days on a multi-CPU S100 system. Letting a neophyte programmer be the lead/only programmer on any job is going to be dangerous in any language.

> The projects I have followed up on were just that. Beginners
> trying to bet the farm an a "quick language".

And again the problem is beginners.

> My first forth was on a 6502 over 20 years ago.
>
> Nothing has changed.

Just like nothing in C has changed. :-}

Stephen
--
Stephen Pelc, stephenXXX@INVALID.mpeltd.demon.co.uk
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, fax: +44 (0)23 8033 9691
web: http://www.mpeltd.demon.co.uk - free VFX Forth downloads

Article: 73928
Hi,

If you just need to delay dqs to capture the DDR data, then maybe you can infer an IBUF delay on dqs by giving a constraint in the UCF. For Virtex2 devices the delay inferred is around 3 ns.

Tia

"van de Kerkhof" <bvdk@NOSPAMMoce.nl> wrote in message news:<1096459252.191676@news-ext.oce.nl>...
> It is ment for dqs delay in a ddr design.
>
> synthesis is ok the delay line is still there but ISE is deleting them.
>
> Bram

Article: 73929
Hi Zohar, My system is running at 75 MHz right now. My fmax is 90+ so I may try a little higher too. I didn't simulate, but I used SignalTapII to verify that I was indeed dma'ing the contents of an external fifo into sdram at a rate of 1 clock per 32bit word. (480 words in ~485 cpu clocks) This was while my system was running out of the same sdram and operating on other sdram data simultaneously - so very good results! I'd like to add that I got these perfect results with the help of the Altera Nios team. I'll be posting this setup on the IP section of the Nios Forum. This was *writing* to sdram, and I haven't really looked at reads yet. Hopefully I'll be just as pleased with those results. In my case reads are not quite as critical as writes, as the data streaming in is real time and waits for no one. Ken "zg" <zohargolan@hotmail.com> wrote in message news:e24ecb44.0409301615.7b8bccaf@posting.google.com... > Hi Ken, > > Thank you for your response. > What frequncy are you running the NIOS? I tried simulating at 72MHz > and I got only 2 words bursts. > Are you using NIOS II? > > Regards, > Zohar > > "Kenneth Land" <kland_not_this@neuralog_not_this.com> wrote in message news:<10lg0aca67jseaa@news.supernews.com>... > > "zg" <zohargolan@hotmail.com> wrote in message > > news:e24ecb44.0409241455.340ba130@posting.google.com... > > > Hi All, > > > > > > I am trying to use the SDR SDRAM controller that is comming with the > > > NIOS II development package. In the simulation it looks like this core > > > supports only 2 words bursts. I couldn't find anything in the > > > documentations. > > > Am I correct? > > > If this core supports bigger bursts then 2 words, any ideas what am I > > > doing wrong? > > > > > > Thank you all > > > Zohar > > > > Zohar, > > > > I'm working on a design that uses one 16MB sdram chip for most of its > > instruction and data memory. I rely heavily on dma and other burst reads > > and writes (ie cache) to get the performance I need. > > > > The sdram controller you're talking about is good enough to burst 480 32 bit > > words in under 485 cpu clocks. It's a beautiful thing to watch in Signal > > Tap as I get this performance. (Haven't explored the lenght limits above > > 480 - my external fifo's AlmostFull level) > > > > I'm hammering that sdram (through the Altera sdram controller) nines ways to > > Sunday and it performs flawlessly. > > > > I have some serious issues with the whole NiosI/II chain, but the sdram > > controller has been a champ. > > > > KenArticle: 73930
Hi,

> - Sometimes "Balanced" gives a faster design than "Speed"

Speed vs. Balanced mapping only achieves a small performance advantage on average. I forget exactly how much, but would not be surprised if it were <5%. Some designs benefit more than others; some have no benefit at all. And even if the synthesis is doing a better job for speed, you also have some random variation from one place & route run to the next. If you perturb any aspect of the input to P&R (for example, change timing constraints or change synthesis slightly), you can get swings in performance; this noise is dependent on which device family you are using.

To truly compare "Speed" vs. "Balanced", you would have to compile the design multiple times for each setting, using a different random seed each time. There is a tool called Design Space Explorer included in the full version of Quartus that automates the process of running seed sweeps (and farms out to multiple machines, etc.). That tool will also play with Quartus settings to try to achieve the best performance. For example, it will try Speed vs. Balanced (and other various knobs) to see what works best for your design.

> - Sometimes the number of LEs/ALUTs change when the only thing changed
> is the speed grade of the devices.

I'll give this one a whirl when I get into work. One possibility is somehow the register packer has chosen not to pack some registers in with LUTs in the C6 case. The delays the fitter sees are different for a C6 than for a C8, and all optimizations are heuristics, so if the inputs change, the outputs usually do too.

> EP1S10F484C5 Balanced 2931 90.88 MHz
> EP2S15F484C3 Balanced 2504(?!) 126.04 MHz

And I will take this opportunity to point out that Stratix II was ~39% faster than Stratix I on this particular compile. That's for the naysayers out there who disregard our 50% average performance improvement as marketing b.s. Again, to get a true comparison, we'd have to run multiple seed sweeps on the base Stratix compile and the Stratix II compile.

Paul Leventis
Altera Corp.

Article: 73931
> Also, I should point out that the reported speeds appear to be based on > synthesis estimates -- you really need to run place & route before you can > ever compare results between two architectures. Synthesis performance > estimates give you a reasonable idea of how changes in a given design affect > its performance, and they *should* be tuned to give the right answer on > average over a large set of designs. But on any given design there's no > guarantee the estimate will be close to the final P&R value, and there are > both systematic design-specific and completely random components to this > error. In short, comparing post-synthesis Fmax across two chips on one > design is quite inaccurate. It has been pointed out to me that we do not recommend that users use synthesis estimates of performance at all, as they do not correlate well to place & route results, even for changes made to the same design. This is not from lack of trying, but due to the disconnect between synthesis (3rd party or integrated) and the downstream place & route tools. In the upcoming release of Quartus, there will be a new early timing estimator feature that will give quick, accurate estimates of design performance. This is possible since the estimator is part of the fitter (place & route) and thus has access to all the same information and analysers that the optimization tool uses. The estimator uses algorithms and heuristics developed and tuned by the place & route team. It will not give perfect results, but they should correlate quite closely with full-fledged place & route, without the hit in run time. Regards, Paul Leventis Altera Corp.Article: 73932
> One poster indicated that 128 bits for a unique serial number would be
> useful. We have used Dallas one-wire parts for that sort of thing. So
> that could put a $1 price on providing a bit of NV memory, but the
> Dallas parts also have a bit of EEPROM memory, so again the
> reprogrammable feature is important.
>
> Another feature that we look for often is a way to provide a software
> configurable board jumper. This jumper needs to be reprogammable,
> control a signal on the board, and be asserted from power up. Currently
> we use a Flash PLD for this.

Features like these are why we include 8 Kb of user-usable, in-system reprogrammable Flash in Max II. If you can give people a cheap CPLD, and absorb an adjacent ~$1 part too, that's worth a lot to users.

But what about FPGAs?

On the high end of things, absorbing a $1 part in a $100 FPGA is less interesting, especially if the extra masks and processing needed to make the NV memory increase the cost of the FPGA. This only becomes potentially interesting if footprint is the issue, but serial EEPROMs are pretty tiny.

At what point does it become worth absorbing the $1 part into the FPGA? For small low-cost parts (Cyclone, for example, as Guy used in his original post), this could be equivalent to a ~20% savings. But if you incorporate NV in the low end of the family, you probably do so across the entire family, since you want to use the same process, unless it is simple to nuke the extra masks and get back to a process with the exact same costs (production and characterization) as a vanilla CMOS.

And there is the issue of what size process technology you can use, and how much overhead there is to incorporating NV. For example, if you want Flash, the smallest available process is maybe 0.13u (from what I've seen in the press), so you take a big cost hit compared to using 90 nm. Also, you need sense amps etc. as well as the bits, so once you throw in a small bit of memory, you might as well throw in a little more...

And you have to ask -- what are you gaining by incorporating an NV RAM? Is it a cost reduction? Why would an FPGA and discrete NV RAM be more expensive than an FPGA including NV RAM? Combining two dice into one larger die can increase cost due to higher chance of defect. Is it for lower footprint? Certainly there could be niches that would like the footprint reduction of combining two functions into one, but is it large enough to support the development, and is it worth taxing all the other users who don't care about footprint?

Incorporating extra functionality in an FPGA is usually done because it provides some additional advantage beyond absorbing external components. Let's look for a second at incorporating volatile RAM. Altera clearly thinks it is worth including medium-sized SRAM blocks (hence the 512 Kb RAMs in Stratix and Stratix II). Designs often require one or two large RAMs (off-chip) for long-term data storage, and a few medium RAMs for buffering data (packet headers, line buffers in video apps), and many small RAMs for a slew of applications including buffering. While discrete SRAMs are cheap, you need to consume a lot of I/Os to talk to them, which burns power and means you need to route a lot of traces. So bringing most RAM functions on-chip can pay off for the small and medium RAMs. More importantly, by using multiple on-chip SRAMs you can achieve higher bandwidth and/or lower latency than off-chip RAMs, so there can be a system performance advantage as well.
Yet Xilinx does not include SRAMs larger than 18 Kb in their parts. If the market is divided on incorporating medium SRAMs, my guess is it's a long way from thinking big NV rams are needed. Paul Leventis Altera Corp.Article: 73933
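To make the "small and medium RAMs on-chip" point concrete, this is the sort of buffer that typically ends up in those embedded blocks. It is a generic Verilog sketch with invented names and widths (nothing here comes from the thread); most synthesis tools will map it onto an embedded RAM block rather than building it from logic cells:

// Minimal sketch of an on-chip buffer: a simple dual-port memory with one
// write port and one registered read port, sized like a video line buffer.
// Names and widths are illustrative only.
module line_buffer #(
  parameter DW = 16,          // data width
  parameter AW = 10           // 2**AW entries
)(
  input  wire          clk,
  input  wire          we,
  input  wire [AW-1:0] waddr,
  input  wire [DW-1:0] wdata,
  input  wire [AW-1:0] raddr,
  output reg  [DW-1:0] rdata
);
  reg [DW-1:0] mem [0:(1<<AW)-1];

  always @(posedge clk) begin
    if (we)
      mem[waddr] <= wdata;    // write port
    rdata <= mem[raddr];      // registered read port
  end
endmodule

Buffers like this are where the I/O-pin, board-trace, and bandwidth argument above applies; the external memory is then reserved for the one or two genuinely large storage arrays.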
> # Xilinx benefit if MicroBlaze is in the news
>
> # Such efforts expand usage of, and research in, MicroBlaze
>
> # It can be a usefull second opinion / benchmark
>
> # Xilinx will have trademark rights to MicroBlaze, so they can
> restrict use of the name. Other examples of this are 6805 uC and i2c
> instances.
>
> # The open source core is only a tiny portion of system development:
> you also have compilers/SWdebuggers/HWDebuggers/Libraries, and all of
> those will have Xilinx license restrictions for Xilinx FPGAs.

Actually, the compiler and, I believe, the debugger are open source. This means that people are very close to having everything for free.

Cheers,
Jon

Article: 73934
> Here are some numbers:
> ASICS are only for extreme designs:
> extreme volume, speed, size, low power
> Cost of a mask set for different technologies:
> 250 nm: $ 100 k
> 180 nm : $ 300 k
> 130 nm: $ 800 k
> 90 nm: $1200 k
> 65 nm: $2000 k
> plus design, verification and risk.
>

Hmm. Take those figures with a pinch of salt. At the higher geometries, you can definitely get much cheaper than that.

Cheers,
Jon

Article: 73935
> ASICs have to develop new test methods for each design.

Really? The same techniques have been used on all the ASICs I've worked on: Scan test and RAM BIST. With ATPG s/w it is pretty easy to do as well.

Cheers,
Jon

Article: 73936
Peter Alfke <peter@xilinx.com> wrote in message news:<BD81EC1E.8EC5%peter@xilinx.com>...
> > From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
> > Organization: University of Washington
> > Newsgroups: comp.arch.fpga
> > Date: Thu, 30 Sep 2004 16:26:50 -0700
> > Subject: Re: FPGA vs ASIC area
> >
> > What I hear is even more important with current technology
> > is NRE costs, such as masks. Stories are of mask sets in the
> > million dollar range.
> >
> > -- glen
>
> Here are some numbers:
> ASICS are only for extreme designs:
> extreme volume, speed, size, low power
> Cost of a mask set for different technologies:
> 250 nm: $ 100 k
> 180 nm : $ 300 k
> 130 nm: $ 800 k
> 90 nm: $1200 k
> 65 nm: $2000 k
> plus design, verification and risk.
>
> We in the FPGA business really know the price of ASICs, for (think about it)
> we really design and produce circuits as if they were ASICs.
> Our saving grace is that we sell them in large numbers to many customers.
> Peter Alfke

And we in the ASIC business really know the price of FPGAs...:-)

Given that an ASIC is by definition a close fit to the application and an FPGA is a less good fit (but possibly in a more advanced technology), the answer to the original question (with lots of caveats) is probably around a 5x-10x difference in silicon area (and power consumption!), and probably around a 2x-5x difference in unit price because of the undeniable economies of scale for FPGAs.

So it's simple -- developing an ASIC has higher cost/risk/timescale and the result is less flexible but with higher performance and lower power -- and cheaper *if* you buy enough of them for the lower unit price to make up for the higher NRE costs. This is engineering, not marketing.

As the cost of traditional ASIC development goes up with more advanced technologies, the break point for total revenue over lifetime at which a project can justify this is moving up -- nobody in either the FPGA or ASIC business can deny this, which is why the number of such design starts is falling. To close the gap, FPGA companies are promoting strategies such as "hard FPGAs" (effectively metal-programmed FPGAs) to reduce power/area/unit cost (but increase NRE cost/time), and ASIC companies are promoting strategies such as "structured ASIC" (effectively metal-programmed ASICs) to reduce NRE cost/time (but increase power/area/unit price). Since both often include hard-coded blocks such as multipliers/RAM/ROM, there's obviously some convergence going on here...

Ian Dedic
Chief Engineer, Mixed Signal Division
Fujitsu Microelectronics Europe

P.S. I'm only really arguing with Peter's use of "extreme" here!
P.P.S. On the technology issue, if your pockets were deep enough you could have had your very own 90nm ASIC well before any 90nm FPGAs emerged -- complete with your very own bugs, of course...:-)

Article: 73937
Thank you! A very good site!

Philip Freidin wrote:
> On Wed, 29 Sep 2004 21:08:14 +0200, André Schekatz <andre.schekatz@ruhr-uni-bochum.de> wrote:
>
>> How to find a Evaluation board for Xilinx Virtex II
>>
>> Hallo,
>> please can anybody help me?
>> I want to program a Virtex II or Virtex Pro FPGA Chip. Anybody knows an
>> evaluation board to program this chips? I want to make a fast analogue
>> digital converter and I want to make a performance check. Fist I wand to
>> create a program with mathlab and than by hand with VHDL. Please can
>> somebody tell me a good Evalation Board? I have a gadget of 1500$.
>
> The usual place to start a board search is:
>
> http://www.fpga-faq.com/FPGA_Boards.shtml
>
> ===================
> Philip Freidin
> philip.freidin@fpga-faq.com
> Host for WWW.FPGA-FAQ.COM

Article: 73938
On Fri, 01 Oct 2004 06:18:58 -0700, Jon Beniston wrote: >> Here are some numbers: >> ASICS are only for extreme designs: >> extreme volume, speed, size, low power >> Cost of a mask set for different technologies: >> 250 nm: $ 100 k >> 180 nm : $ 300 k >> 130 nm: $ 800 k >> 90 nm: $1200 k >> 65 nm: $2000 k >> plus design, verification and risk. >> > > Hmm. Take those figures with a pinch of salt. At the higher > geometries, you can definetly get much cheaper than that. > > Cheers, > Jon Those figures are for pure ASICs, what are the costs for structured ASICs?Article: 73939
Hal, Amazing. This 'rule of tens' must be a universal systems rule. Probably a corollary to Murphy's Law. Austin Hal Murray wrote: > > >> The pain of this >>was so high, that I had a rule, called "Austin's rule of tens." >>Sell ten, recall, fix the bug. >>Sell a hundred, find and fix the next bug. >>Sell a thousand, find and fix the next bug. >>And so on, and so forth. > > > I learned that rule over 15 years ago. I was babysitting for > a large (for the time) email system that had just been hit > with that sort of bug. (Fortunately, the guys who did the > initial work were good friends of mine.) > > The rule was something like: > "Every time the installed base goes up by a factor of 10 > another fatal bug will crawl out of the woodwork." > > I think it's been true for any system I've worked on. > For most sytems, you can get 10 in your lab and 100 > with friendly customers. Then it gets interesting... > (Scale by 10 one way or the other if that fits your > system better.) > > > >>The one time we used an ASIC (as the requirement was slam dunk, no >>issues) we ended up throwing all the ASICs away because the (bell) >>operating company notified us that the requirements doc had a patented >>technique that they did not know of at the time. So rather than get a >>license for it ($$$), they removed it from the requirements document. > > > Interesting point. I remember a time when an FPGA saved my bacon. > Yup. Spec change. Just one bit. Trival to fix. > > So if you are considering FPGA vs ASIC, add the stability > of your requirements to the decision process. Don't forget > to consider bugs in the spec. Committees are great at writing > complicated documents that don't work in obscure corner cases. > Or don't specify what happens and 3 people make incompatible > assumptions. >Article: 73940
I have done read/write access to sram in a basic way in three cycles without any problem. hmurray@suespammers.org (Hal Murray) wrote: > What particular SRAM is on the board that started this thread? It is ISSI IS61LV25616AL. MeteArticle: 73941
Jon,

Peter is right: ASIC testing rarely gets more than 95% coverage. The best is about 98% coverage. We can get an arbitrarily high coverage by just increasing our patterns (99.9%+) for 0 added silicon cost. ASICs can not do that. To get any better, they either have to add more logic for BIST (30%+ of a Pentium IV is BIST logic), which increases area and cost and decreases yield, or just be happy with the coverage of the scan chain (which is not all that good). Each BIST or scan chain is unique, and software, test vectors, etc. must be developed each time anything is new or different.

FPGAs have 0% extra area for BIST (they are 100% BIST with a different bitstream!).

You must understand that the 405PPC, MGT, DCM, and other "hardened IP" are just like ASICs, so we already know everything there is to know about ASICs, their design, and testing. In fact, Xilinx is the 3rd largest 'ASIC' manufacturer in the world (behind IBM and NEC -- Gartner/Dataquest 'ASIC/FPGA Vendor Ranking 2003').

FPGA vendors may be the last stronghold of full custom ASIC design left in the world. ASIC houses are mostly standard cell, or structured (basically same thing), with little or no full custom. Our customers tell us that if they want to play with the latest and greatest technologies and designs (like 10 Gbs MGTs), they need to use our FPGAs, because the ASIC cells are a generation behind.

Austin

Jon Beniston wrote:
>> ASICs have to develop new test methods for each design.
>
> Really? The same techniques have been used on all the ASICs I've
> worked on: Scan test and RAM BIST. With ATPG s/w it is pretty easy to
> do to as well.
>
> Cheers,
> Jon

Article: 73942
On Thu, 30 Sep 2004 13:23:31 -0700, Peter Alfke wrote:

> This seemingly simple question does not have a simple answer. There are
> too many variables. Modern FPGAs are leaping ahead of ASICs in the use
> of the tightest technology ( 90 nm today). Some FPGA structures are
> inherently as efficient as in ASICs (BlockRAM, I/O, transceivers, hard
> microprocessors). In other respects FPGAs can be far less efficient in
> their silicon use. But FPGAs benefit from a very regular and tight
> chip-layout, and they are manufactured by the multi-millions (as opposed
> to most ASICs). And finally: Silicon is cheap, silicon area is not
> decisive, the total manufacturing and distribution cost is! Don't count
> the square microns, count the dollars.
>
> Peter Alfke

Peter,

While all of those structures that you mention are as efficient as an ASIC, and probably more efficient because Xilinx can afford to spend more to optimize them, the fact remains that most of the resources on an FPGA are unused in any particular design.

Some resources, like multipliers, are only used for certain types of applications. I've been using Xilinx FPGAs since the 3000 series, and I've never needed a multiplier once in all of those years.

Some resources, like the PPCs, are useful but no one needs as many as Xilinx puts on a chip. The Virtex2P has up to four PPCs; who needs four? Virtually every embedded system that I've worked on has one processor, generally an IBM 405 or equivalent, but none has ever had more than one. Xilinx put four on the Virtex2P because silicon guys made the decision, not system designers. Three out of four of the PPCs are just taking up space and burning power; they are useless.

Some resources are useful for every design, like Block RAMs, but most of the time you don't use all of them. On Spartan3 there isn't a lot of Block RAM available, so you tend to use most of the RAMs (although here you generally reserve several for ChipScope), but on the big Virtex (2, 2P, 4) class parts there is so much RAM that you are unlikely to need all of it.

And some resources are hugely underused by their very nature, i.e. the interconnect. Most of the area in the fabric portion of the FPGA is devoted to programmable interconnect. Only a few percent of the interconnect resources can be used in any particular design; that's the nature of the beast. If you have a mux in a crosspoint switch that has 8 inputs, 7 out of 8 inputs must go unused. There isn't any way to get around this in a programmable device; if you cut down on the interconnect resources the part becomes unroutable. An ASIC that uses metal to make connections has a 50 to 1 advantage over an FPGA in this area.

The place where FPGAs win big is NRE: an FPGA design costs millions less than an ASIC design. Even if you take out the cost of the mask set there is a big difference, because you don't have to spend as much on verification for an FPGA design as for an ASIC. With an FPGA you can go into the lab with a design that's almost right because fixing the bug is cheap; with an ASIC you have to have a design that's completely right before you build it because you can't fix it later.

Clearly, if you are building a small number of systems, a few thousand or less, you want to use FPGAs. If you are building a huge number, hundreds of thousands or more, you want an ASIC. In between is a grey area that has to be evaluated for each system.

Article: 73943
GS,

Well, for the first mask set, those are the costs. For subsequent masks for an individual metal layer, they can range from $10K (upper metal) on up to as much as $350K (poly).

So how many layers get changed to make the structure?

Austin

General Schvantzkoph wrote:
> On Fri, 01 Oct 2004 06:18:58 -0700, Jon Beniston wrote:
>
>>> Here are some numbers:
>>> ASICS are only for extreme designs:
>>> extreme volume, speed, size, low power
>>> Cost of a mask set for different technologies:
>>> 250 nm: $ 100 k
>>> 180 nm : $ 300 k
>>> 130 nm: $ 800 k
>>> 90 nm: $1200 k
>>> 65 nm: $2000 k
>>> plus design, verification and risk.
>>>
>>
>> Hmm. Take those figures with a pinch of salt. At the higher
>> geometries, you can definetly get much cheaper than that.
>>
>> Cheers,
>> Jon
>
> Those figures are for pure ASICs, what are the costs for structured ASICs?

Article: 73944
Hello,

I have manually added the peripheral described in the following document to my system, and there really were output results, so it worked quite well.

http://direct.xilinx.com/bvdocs/appnotes/xapp529.pdf

The next step was that I altered the entity iDCT so that it only performs a simple xor operation. This was the only change! But unfortunately the output value is always 0, so can somebody perhaps tell me what went wrong in this example? The IP should be added correctly to the MicroBlaze processor; is it perhaps possible that I have to set the Data_Out_Valid signal <= 1 before I am able to read from the output port?

Here is my entity declaration and the software part for writing and reading!

entity iDCT is
  port (
    Clk            : in  std_logic;
    Reset          : in  std_logic;
    Data_In        : in  std_logic_vector(15 downto 0);
    Data_In_Valid  : in  std_logic;
    Read_Data_In   : out std_logic;
    Data_Out_Full  : in  std_logic;
    Data_Out       : out std_logic_vector(31 downto 0);
    Data_Out_Valid : out std_logic);
end entity iDCT;

architecture IMP of iDCT is
begin
  Data_Out <= ("0000000000000000") & (Data_in xor ("0000000000000001"));
end architecture IMP;

k=2222;
microblaze_nbwrite_datafsl(k,0);
k=0;
microblaze_nbread_datafsl(k,0);
xil_printf("X...OR :%d -- ",k);

Thanks for your help!
R

Article: 73945
You have to handle the other signals in the protocol like the data_in_valid,... You also have to set the Data_Out_Valid when you actually have valid data. Göran Roger Planger wrote: >Hello > >I have added manually the Peripherial described in the following document to >my system. >And there were really output results, so it worked quite good. > >http://direct.xilinx.com/bvdocs/appnotes/xapp529.pdf > >The next step was, that I altered the entity iDCT, so that it only performs >a simple xor operation.This was the only change! >But unfortunately the output value is always 0, so can somebody perhaps tell >me what went wrong in this example? >The IP should be added correctly to the Microblaze processor, is it perhaps >possible that I have to set the Data_Out_Valid signal <= 1 before I am able >to read from the output port? > >Here is my entity declaration and the softwarepart for writing and reading! > >entity iDCT is >port ( >Clk : in std_logic; > >Reset : in std_logic; > >Data_In : in std_logic_vector(15 downto 0); > >Data_In_Valid : in std_logic; > >Read_Data_In : out std_logic; > >Data_Out_Full : in std_logic; > >Data_Out : out std_logic_vector(31 downto 0); > >Data_Out_Valid : out std_logic); > >end entity iDCT; > >architecture IMP of iDCT is > >begin > >Data_Out <= ("0000000000000000") & (Data_in xor ("0000000000000001")); > >end architecture IMP; > > k=2222; > microblaze_nbwrite_datafsl(k,0); > k=0; > microblaze_nbread_datafsl(k,0); > xil_printf("X...OR :%d -- ",k); > >Thanks for your help! >R > > > > > > >Article: 73946
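To make the handshake Göran describes concrete, here is a minimal sketch of the same idea. The thread's peripheral is VHDL (from XAPP529); this is a Verilog rendering under my reading of the template's signal roles (Data_In_Valid = input FIFO has data, Read_Data_In = read acknowledge, Data_Out_Valid = write strobe, Data_Out_Full = output FIFO full), so treat it as an illustration of the protocol rather than a drop-in replacement.

// Sketch only: pop one word from the input FSL when it is valid, register the
// XOR result, and strobe it into the output FSL for exactly one cycle when
// the output FIFO has room.  Signal roles are my interpretation of the
// XAPP529 wrapper, not copied from it; reset polarity is assumed active high.
module xor_fsl (
  input  wire        Clk,
  input  wire        Reset,
  input  wire [15:0] Data_In,
  input  wire        Data_In_Valid,   // input FIFO has data
  output reg         Read_Data_In,    // acknowledge/pop the input word
  input  wire        Data_Out_Full,   // output FIFO cannot accept a word
  output reg  [31:0] Data_Out,
  output reg         Data_Out_Valid   // write strobe to the output FIFO
);
  reg pending;                        // a result is waiting to be written

  always @(posedge Clk) begin
    if (Reset) begin
      Read_Data_In   <= 1'b0;
      Data_Out_Valid <= 1'b0;
      pending        <= 1'b0;
    end else begin
      Read_Data_In   <= 1'b0;
      Data_Out_Valid <= 1'b0;

      if (Data_In_Valid && !pending && !Read_Data_In) begin
        Data_Out     <= {16'h0000, Data_In ^ 16'h0001};
        Read_Data_In <= 1'b1;         // consume the input word
        pending      <= 1'b1;
      end else if (pending && !Data_Out_Full) begin
        Data_Out_Valid <= 1'b1;       // present the result for one cycle
        pending        <= 1'b0;
      end
    end
  end
endmodule

The key points are the ones Göran lists: consume the input only when Data_In_Valid says there is something to read, and pulse Data_Out_Valid only when a result is actually being presented (and the output FIFO is not full). In the original architecture nothing is ever written into the output FSL, so the non-blocking read on the MicroBlaze side leaves k unchanged at the 0 it was just set to, which matches the printout the poster sees.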
I got the Spartan-3 Starter Kit yesterday from Xilinx. This board is a really good bargain: an XC3S200 and 1MB SRAM for just $99. This board makes it hard for guys like Tony Burch or me to sell FPGA boards ;-(

Only the Flash is a little bit small.... Not too much space left for application data. However, the board and the documentation are fine.

It took me only half a day to port JOP (a Java processor) from the Altera Cyclone to the Spartan (thanks to Ed Anuff who did the hard part and wrote a memory generator for Xilinx). Just two Xilinx-specific files for the top level and the memory interface. You can find a Xilinx ISE project under xilinx/s3sk for JOP on this board.

If you have such a board and want to try out JOP:

Download the JOP sources from: http://www.jopdesign.com/download.jsp
Compile the ISE project under ../xilinx/s3sk
Download JOP to the FPGA
Connect a serial cable from your PC to the board
Open a command prompt in ../java/target
Change the COM-port in doit.bat
type: doit test test Clock

That's it, a small Java program should now run on the Spartan!

Martin

----------------------------------------------
JOP - a Java Processor core for FPGAs:
http://www.jopdesign.com/

Article: 73947
Hi there,

We have a design that uses 3 MGTs, which we use 16 bits wide. To ensure we have byte alignment properly, we set ALIGN_COMMA_MSB to true on the GT_CUSTOM instances. When not sending any data, we simply send comma signals over the line.

However, it would appear that something is wrong. When we start the system, the MGTs seem to be aligned only to 8 bits, so 50% of the time the data is a byte out of phase. I can see this by dumping the output of the MGT over the serial line (as fast as it copes) - for each MGT it's either consistently the comma value or consistently the comma value rotated 8 bits.

To make things more confusing, when we send packets (as delimited by a start and end packet symbol), the MGTs that aren't aligned will ignore/lose/drop the first packet, and then subsequently be correctly byte aligned, and all subsequent packets get through perfectly.

Does anyone have any suggestions as to what might be causing this? Losing the first packet really isn't tolerable, as in our system the lines will run through a switch, which will mean that the MGTs will have to clock synchronise/detect alignment on a very regular basis, and losing the first packet will cause problems.

In case it's of relevance, the component declaration of the GT_CUSTOM part specifies the generic parameter ALIGN_COMMA_MSB as boolean, defaulting to true, and in the instantiation this is overridden with the value true as well.

Cheers,

--
Michael Dales
University of Cambridge Computer Laboratory
http://www.cl.cam.ac.uk/~mwd24/

Article: 73948
Hi,

I'm getting an HDL ADVISOR message when I synthesize this code. Basically, it is a big shift register that shifts in data on each rising edge of mclk if sclkRise is high, and outputs its data on the rising edge of mclk if lrclkRise is high. I don't quite understand the advice here. Can anybody help?

Thanks,
David

--ADVICE
INFO:Xst:741 - HDL ADVISOR - A 9-bit shift register was found for signal <datain<8>> and currently occupies 9 logic cells (5 slices). Removing the set/reset logic would take advantage of SRL16 (and derived) primitives and reduce this to 1 logic cells (1 slices). Evaluate if the set/reset can be removed for this simple shift register. The majority of simple pipeline structures do not need to be set/reset operationally.

--CODE
always @ (posedge mclk or negedge resetb)
begin
  if (~resetb)
  begin
    datain    <= 0;
    leftdata  <= 0;
    rightdata <= 0;
  end
  else
  begin
    if (sclkRise == 1)
    begin
      datain[63:1] <= datain[62:0];
      datain[0]    <= sdata;
    end
    if (lrclkRise == 1)
    begin
      leftdata  <= datain[63:40];
      rightdata <= datain[31:8];
    end
  end
end

Article: 73949
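What the advisor is saying: XST can pack a shift register into an SRL16/SRLC16 primitive only if the register has no set/reset, so the asynchronous clear on `datain` is what forces it into discrete flip-flops. The chain whose intermediate bits are never read elsewhere (the 9-bit <datain<8>> chain in the message) can collapse into an SRL16E if the reset is dropped from `datain`; the bits read in parallel for `leftdata`/`rightdata` stay as flip-flops either way. If the shifted-in samples don't actually need to be cleared at reset (they are flushed out by normal shifting anyway), one way to restructure the code is sketched below; the module wrapper, its name, and the port widths are my own assumptions added only to keep the example self-contained.

// Sketch of the restructuring the HDL Advisor suggests: the shift register
// loses its reset so it can map to SRL16E primitives, while the output
// registers keep the asynchronous reset.
module audio_deser (
  input  wire        mclk,
  input  wire        resetb,
  input  wire        sclkRise,
  input  wire        lrclkRise,
  input  wire        sdata,
  output reg  [23:0] leftdata,
  output reg  [23:0] rightdata
);
  reg [63:0] datain;

  // Plain shift register with clock enable, no set/reset: SRL-friendly.
  always @(posedge mclk)
    if (sclkRise) begin
      datain[63:1] <= datain[62:0];
      datain[0]    <= sdata;
    end

  // Only the output registers are reset; stale shift-register contents are
  // pushed out by normal shifting once reset is released.
  always @(posedge mclk or negedge resetb)
    if (~resetb) begin
      leftdata  <= 24'd0;
      rightdata <= 24'd0;
    end else if (lrclkRise) begin
      leftdata  <= datain[63:40];
      rightdata <= datain[31:8];
    end
endmodule

If the design genuinely needs `datain` to be zero after reset, the advice simply doesn't apply and the flip-flop implementation is the correct one.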
Paul- you make some good points in your discussions especially about the $1 extra cost compared to a $100 FPGA as well as highlighting the significance of saving that same $1 if it was in a Cyclone low cost device. However, how would you change your mind if some of the newer developing NV memory technologies were similar in size to flash cell/dram cell (<< area of SRAM), also used standard leading edge CMOS process (unlike Flash or DRAM), thus much more cost effective bulk memory without adding any process premium to the rest of the chip? Though maybe not the holy grail, don't you think it provides a good amount of value as cited by some of Hals, Jim's and Nicholas' posts? There are probably even other ideas out there for value. -superfpga "Paul Leventis (at home)" <paulleventis-news@yahoo.ca> wrote in message news:xK-dnahhrZsYyMDcRVn-tQ@rogers.com... > > One poster indicated that 128 bits for a unique serial number would be > > useful. We have used Dallas one-wire parts for that sort of thing. So > > that could put a $1 price on providing a bit of NV memory, but the > > Dallas parts also have a bit of EEPROM memory, so again the > > reprogrammable feature is important. > > > > Another feature that we look for often is a way to provide a software > > configurable board jumper. This jumper needs to be reprogammable, > > control a signal on the board, and be asserted from power up. Currently > > we use a Flash PLD for this. > > Features like theses are why we include 8 Kb of user useable, in-system > reprogrammable Flash in Max II. If you can give people a cheap CPLD, and > absorb an adjacent ~$1 part too, that's worth a lot to users. > > But what about FPGAs? > > On the high end of things, absorbing a $1 part in a $100 FPGA is less > interesting, especially if the extra masks and processing needed to make the > NV memory increases the cost of the FPGA. This only becomes potentially > interesting if footprint is the issue, but serial EEPROMs are pretty tiny. > > At what point does it become worth absorbing the $1 part into the FPGA? For > small low-cost parts (Cyclone, for example, as Guy used in his original > post), this could be equivalent to a ~20% savings. But if you incorporate > NV in the low-end of the family, you probably do so across the entire family > since you want to use the same process, unless it is simple to nuke the > extra masks and get back to a process with the exact same costs (production > and characterization) of a vanlilla CMOS. > > And there is the issue of what size process technology you can use, and how > much overhead there is to incorporating NV. For example, if you want Flash > the smallest available process is maybe 0.13u (from what I've seen in the > press), so you take a big cost hit compared to using 90 nm. Also, you need > sense amps etc. as well as the bits, so once you through in a small bit of > memory, you might as well throw in a little more... > > And you have ask -- what are you gaining by incorporating an NV RAM? Is it > a cost reduction? Why would an FPGA and discrete NV RAM be more expensive > than an FPGA including NV RAM? Combining two dice into one larger die can > increase cost due to higher chance of defect. Is it for lower footprint? > Certainly there could be niches that would like the footprint reduction of > combining two functions into one, but is it large enough to support the > development and is it worth taxing all the other users who don't care about > footprint? 
> > Incorporating extra functionality in an FPGA is usually done because it > provides some additional advantage beyond absorbing external components. > Let's look for a second at incorporating volatile RAM. Altera clearly > thinks it is worth including medium-sized SRAM blocks (hence the 512 Kb rams > in Stratix and Stratix II). Designs often require one or two large RAMs > (off-chip) for long-term data storage, and a few medium RAMs for buffering > data (packet headers, line buffers in video apps), and many small RAMs for a > slew of applications including buffering. While discrete SRAMs are cheap, > you need to consume a lot of I/Os to talk to them, which burns power and > means you need to route a lot of traces. So bringing most RAM functions > on-chip can pay-off for the small and medium RAMs. More importantly, by > using multiple on-chip SRAMs you can achieve higher bandwidth and/or lower > latency than off-chip RAMs, so there can be a system performance advantage > as well. > > Yet Xilinx does not include SRAMs larger than 18 Kb in their parts. If the > market is divided on incorporating medium SRAMs, my guess is it's a long way > from thinking big NV rams are needed. > > Paul Leventis > Altera Corp. > >