> The issue is that the rules set is a very complex set of rules,

Why do you say it is VERY complex? I do not see a set of rules that defines
what hardware Verilog constructs get compiled down to as being VERY complex.
I am only talking in general terms, as in: what does a case statement compile
down to? Not something that specifies EVERY possible case statement... which
is clearly foolish, much less impossible. Typically, a synthesizer will take
Verilog and use the same type of hardware for certain types of case
statements.

> ...but also on performance, area (a default setting if you don't specify
> it), what the logic between the registers actually is etc.

Exactly, and if you were able to specify this, you could get the synthesizer
to compile the HDL to the appropriate hardware (which, to some degree, you
can do), such as using different techniques for state machines based on that
info.

> If they were simple rules, then yes, such a reference guide could be
> generated.

I believe you believe they are more complicated than they really are, or at
least you believe I'm talking down to a level that I'm not. I'm only talking
about a base level of "complex" (as opposed to simple logic) constructs.
Obviously, you don't have to define a = b or c...

> Where the rules are as complex as they are, publishing the rules would not
> only give away the crown jewels (after all, it is the set of rules and
> application of those rules that differentiates synthesis tools, right),

I don't know that I agree with that entirely.

> but it would also likely be very confusing to the consumer, which means a
> much bigger support cost.

Or that... In fact, it can reduce support costs, because then you wouldn't
have to fight with the tools to get the hardware you expected them to
generate... as your expectations are laid out beforehand.

> For the cases where you need a certain structure, structurally instantiate
> the logic. You are pretty much (found a bug a while back where one tool
> was optimizing into Xilinx FDRE primitives) guaranteed to get what you put
> in.

Understood, but then you are simply using the HDL as a netlister, and that,
to me, is really a waste of a $30k+ tool.

> If it is not as critical, then you can let the tool do it as it pleases.

Of course.

> As I indicated, there is some middle ground which you can cover by
> inserting keep buffers to force certain signals in a combinatorial tree to
> appear at LUT outputs. That severely limits the options the compiler has
> for optimizing your code, so it serves as a way of forcing the outcome
> without resorting to instanced LUTs (which can be a pain in the tusch).

Understood, but I'm not convinced that's really all that great a solution...
as it's really not in the realm of what I'm talking about, such as case
statements.

Regards,

Austin
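As a rough illustration of the kind of construct Austin is talking about (this sketch is not from the thread, and the module and signal names are invented): a fully specified Verilog case statement like the one below is typically synthesized as a small LUT multiplexer, whereas leaving out the default branch in combinational code would normally infer a latch instead.

// Hypothetical example (not from the original posts): a fully specified
// case statement, which a synthesizer will typically map to a 4-to-1
// multiplexer built from LUTs.
module case_mux (
    input  wire [1:0] sel,
    input  wire [3:0] a, b, c, d,
    output reg  [3:0] y
);
    always @(*) begin
        case (sel)
            2'b00:   y = a;
            2'b01:   y = b;
            2'b10:   y = c;
            default: y = d;  // covering every case avoids an inferred latch
        endcase
    end
endmodule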
"Eric Smith" <eric-no-spam-for-me@brouhaha.com> wrote in message news:qhsmvno8t8.fsf@ruckus.brouhaha.com... > "Austin Franklin" <austin@da98rkroom.com> writes: > > It HAS to be accurately predictible or it be full of bugs > > If you take some input, apply a series of transformations to it, and get > some other output, the transformations may all be well-defined (and > deterministic), and may be guaranteed to preserve some invariants, such > that the output from the process is semantically equivalent to the input. > Yet you can't easily predict what the output will be. The inability > to predict the overall transformation does not inherently make it buggy. I believe you aren't getting what it is I'm talking about. I am NOT talking about "mapping" logic. I am basically talking constructs such as in the case of a case statement. A spec for logic optimization, as I've clearly stated, isn't what I'm talking about. > > and give unpredictible results! > > If it's not accurately predictable, it's unpredictable. Yes, that's > a tautology. The fact that you can't predict it doesn't make it > unusable. Of course. > > > However if you change the input at all, > > > the output may change a lot in response to a subtle change because an > > > alteration in the way the rules are applied to the code. > > > > Yes, but the rules are random? Somehow aren't "known"? I'm not following > > you here, or agreeing with you. > > Of course they're not random. But when you apply enough non-random rules, > the overal behavior is not easily predicted. I don't see how that applies to what I'm talking about. AustinArticle: 51752
Article: 51752

Kevin Neilson wrote:

> FlexLM is evil.

OK, I believe it, but first I would like to know whether I have to wipe
Windows and reinstall the OS, or whether there is a simpler solution.

>> Today I tried recompiling one of my Altera projects using the free
>> MaxPlus, and I discovered that FLEXlm says that the time has been set
>> back. Although I did not believe it would succeed, I tried downloading a
>> new license and reinstalled MaxPlus. Of course it did not help. The
>> FLEXlm website wasn't very useful.

--
Regards, Marcin E. Hamerla

"Płoń, płoń, płoń parlamencie, spali Cię ogień na historii zakręcie."
Article: 51753

Hi,

I am building a 24-bit processor in a Virtex-E device. Should I use busses &
TBUFs or MUXes to get data from one point to another? I would appreciate help
from people who know well how the hardware will be realized in the Xilinx
devices.

Until now I have been using TBUFs. My processor has two data busses. There
are about 10 components that read data from the busses. A few registers can
read from only one bus, others can read from either bus, and components such
as the ALU read from both.

Now I wonder if I'm better off using multiplexers and no TBUFs at all. I
thought before that this was out of the question, because it not only uses up
CLBs for the MUXes but also many routing resources. On the other hand, I
already need 2:1 MUXes for bus selection at the inputs.

1) TBUFs or MUXes? What do you advise (to use up less silicon)?
2) Do TBUFs make the design slower?

Thank you!
Dennis
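For reference, here is a minimal single-bit sketch of the two alternatives Dennis is weighing (module and signal names are invented; BUFT is the Xilinx internal tri-state buffer primitive from the unisim library). For a 24-bit bus this would be repeated per bit and per driver.

// Hypothetical sketch - names are made up. One bit of an internal bus,
// driven two ways in a Virtex-E part.
module bus_bit_example (
    input  wire reg_a_bit, reg_b_bit,  // two possible sources
    input  wire drive_a, drive_b,      // assumed one-hot drive enables
    output wire bus_tbuf,              // tri-state (TBUF) version
    output wire bus_mux                // multiplexer version
);
    // (1) Internal tri-state bus built from BUFT primitives
    //     (T is the active-low output enable).
    BUFT u_buft_a (.O(bus_tbuf), .I(reg_a_bit), .T(~drive_a));
    BUFT u_buft_b (.O(bus_tbuf), .I(reg_b_bit), .T(~drive_b));

    // (2) The same selection done as a LUT multiplexer.
    assign bus_mux = drive_a ? reg_a_bit : reg_b_bit;
endmodule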
surely "DILEEP" <dileepjkurian@yahoo.com> Đ´ČëĎűϢĐÂÎĹ :d07c8bc2.0301202059.81a924d@posting.google.com... > hi, > i have around 150 registers(16 bit) in my design, can i use Block > Ram bits for registers. i am using xcv800 Xilinx fpga. > Thanking you > Dileep > Acl Hyderabad > IndiaArticle: 51755
Article: 51755

I'm working on a design in a Xilinx Virtex-II FPGA. In the constraint file a
port can be declared as "FAST" or "SLOW", like:

NET "Data_out<0>" SLEW = FAST;

What's the difference between "FAST" and "SLOW"? Does "FAST" mean the signal
comes out of the pin faster than with "SLOW"?

Thanks in advance.
Article: 51756

Hi,

Does anyone know the quality of those tools that translate a Matlab/Simulink
model into VHDL, ready to be programmed into a targeted FPGA? I'm mostly
talking about DSP Builder (Altera) and System Generator (Xilinx). A couple of
years ago, tools like that were doing a horrible job. How are they now?

Thanks,
Dave
Article: 51757

Hi

I'm using a XC2V1000 -4 for a design which includes a number of 16x16 signed
multiply accumulators (MACs) with registered inputs and outputs (generated
through Xilinx Core Generator v4.2i, update #2). When generating the MAC core
using LUTs instead of the embedded 18x18 multipliers, post place and route
timing reports say that the design will run at 125 MHz.

When using the embedded multipliers with stepping level set to 0 (default for
XC2V1000), the component delay for the MAC (TmultS0) is reported as 9.5 ns,
giving an absolute maximum frequency of approx 105 MHz (obviously the final
figure will be less than this due to routing delays). However, according to a
previous post by Ray Andraka, "the original multiplies have a very slow path
in them that limits performance to 130 Mhz or so with a pipelined multiplier
with the added i/o registers in the fabric in a -4 part." Does anyone

--
Article: 51758

Sorry, I managed to post this message before I'd finished typing it. Here's
the full version:

Hi

I'm using a XC2V1000 -4 for a design which includes a number of 16x16 signed
multiply accumulators (MACs) with registered inputs and outputs (generated
through Xilinx Core Generator v4.2i, update #2). When generating the MAC core
using LUTs instead of the embedded 18x18 multipliers, post place and route
timing reports say that the design will run at 125 MHz.

When using the embedded multipliers with stepping level set to 0 (default for
XC2V1000), the component delay for the MAC (TmultS0) is reported as 9.5 ns,
giving an absolute maximum frequency of approx 105 MHz (obviously the final
figure will be less than this due to routing delays). However, according to a
previous post by Ray Andraka, "the original multiplies have a very slow path
in them that limits performance to 130 Mhz or so with a pipelined multiplier
with the added i/o registers in the fabric in a -4 part." Can anyone explain
the difference between Ray's figures and those reported for my design?

I've tried setting the stepping level to 1, which reduces TmultS1 to 6.594
ns, thus giving an absolute maximum frequency of 151 MHz. However, according
to the same above-mentioned post of Ray's, "The enhanced multipliers get the
speed up to over 200 Mhz". Once again, can anyone explain the discrepancy?

I've also tried turning the RPM option on and off, and it seems that having
the core RPM'ed makes the timing worse.

Many thanks

Sam Duncan
Article: 51759

I managed to get it fixed by moving the constraints file somewhere else, then
deleting the project under Xilinx Design Manager and creating a new project,
then copying the constraints back in.

On the old project I ran P&R initially without any constraints so that I
could get an idea of what the pin names were. There must have been something
that remembered that initial run, which routed VCO_70MZ via a GBUF pin, and
that must have been referenced in the later routing attempts.

Anyway, thank you for all your help & suggestions. I've learnt a bit more
over the last few days.
Article: 51760

Marcin E. Hamerla <mehamerla@pro.onet.pl> wrote in message
news:jbnp2v4r1a25q13i5km60ki9qcnmsorj89@4ax.com...
> Kevin Neilson wrote:
>
> > FlexLM is evil.
>
> Ok, I believe it but first I would like to know if I should cut down
> windows and reinstall OS or there is simpler solution.

I would completely uninstall FlexLM, then use a registry cleaner (I've used
RegCleaner from http://www.vtoy.fi/jv16/shtml/software.shtml before) to
remove all traces of FlexLM from the registry. Re-install FlexLM and see if
it works.

This is a very basic attempt at getting round this problem; I'm sure FlexLM
has other methods of protection besides registry entries, but it's worth a
try before you do a complete OS re-install.

Nial.

------------------------------------------------
Nial Stewart Developments Ltd
FPGA and High Speed Digital Design
www.nialstewartdevelopments.co.uk
Article: 51761

Hello,

I am looking for a used PAL programmer able to handle 16L8 / 20L8 PALs.

I heard from people on this newsgroup that 16L8 and 20L8 are basically
"vintage" stuff, pretty old, and that old programmers should be easy to find
and not expensive (Mr. Andraka said "next to nothing"... :-)

If someone would like to part with an old 16L8/20L8 programmer, please
contact me, removing the "xxx" and "yyy" from my e-mail address. I am located
in Italy.

Thanks for your attention.

Riccardo
Article: 51762

Dear Eric,

Thank you for your answer, but I have searched many times and couldn't find
anything. Can you send me a direct URL that contains a floating point ALU
source in VHDL?

Eric Smith <eric-no-spam-for-me@brouhaha.com> wrote in message
news:<qhfzroee4w.fsf@ruckus.brouhaha.com>...
> vaxent@my-deja.com (Davar Robdan) writes:
> > I'm a VHDL learner, and looking for some VHDL source.
> > Is there anyone who has a 32-bit ALU with floating point written in
> > VHDL? Or can you tell me where on the web I can find it?
>
> Yes, there is, and you can find it on the web at:
>  http://www.google.com/
Article: 51763

Dennis McCrohan <mccrohan@xilinx.com> wrote in message
news:<3E2C70A3.929A348D@xilinx.com>...
> I'll second this notion - try using the Timing Analyzer GUI (timingan.exe)
> in ISE 5.1i.

Not an option, I'm afraid. I'm using v4.2.

Maybe I can give a better idea of what I'm trying to do. I'm part of a
research group that is experimenting with automated ways of translating pure
C through to an FPGA implementation. What I'm doing specifically is working
on an automated way to take the feedback from timing reports and provide more
information to the back end, which translates C to Handel-C automagically.

Anyway, thanks for the feedback; I guess I'll continue working on my own
parser. It shouldn't take more than a week or so; I was just hoping someone
else had already done it :)

Cheers,

Richard Bannister
Dept of Computer Science
Trinity College Dublin
Article: 51764

In order to get the performance numbers I quoted, you need to a) use the
multiplier with the internal pipeline register, b) immediately precede and
follow the multiplier with registers without any LUTs, and c) place those
flip-flops in specific locations to get to the direct connections. The layout
pattern required is not intuitive; you need to look in FPGA Editor to
determine how to place them. I think there is an app note on the placement
too, but I don't know the number. I am not sure what the RPM'd core is doing
or how it is laid out.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930  Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little temporary safety
deserve neither liberty nor safety."  -Benjamin Franklin, 1759
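As a minimal sketch of the register arrangement Ray describes (signal names invented; this covers only the coding side, not the specific flip-flop placement he says is also required), the multiply is wrapped in registers with no LUTs in between, and synthesis is expected to map the * operator onto an embedded 18x18 multiplier:

// Hypothetical sketch - names are made up. A 16x16 signed multiply with
// registers immediately before and after the multiplier.
module pipelined_mult16 (
    input  wire               clk,
    input  wire signed [15:0] a_in, b_in,
    output reg  signed [31:0] p_out
);
    reg signed [15:0] a_r, b_r;   // registers feeding the multiplier
    reg signed [31:0] p_r;        // pipeline register after the multiply

    always @(posedge clk) begin
        a_r   <= a_in;            // input registers (no LUTs in front of
        b_r   <= b_in;            // the multiplier)
        p_r   <= a_r * b_r;       // expected to map to an 18x18 multiplier
        p_out <= p_r;             // output register
    end
endmodule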
Article: 51765

Depends on how many drivers there are on the bus. For small numbers, the
muxes are faster. If you use the tristates, you'll want to floorplan
carefully. The placer does a lousy job autoplacing the tristates, which kills
the performance. In the Virtex-E, you'll have a pattern of two adjacent
columns that can get on a bus interleaved with two adjacent ones that can't.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930  Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little temporary safety
deserve neither liberty nor safety."  -Benjamin Franklin, 1759
Article: 51766

RISC taker <RISC_taker@alpenjodel.de> wrote:
> Hi,
>
> I am building a 24 bit processor in a Virtex-E device. Should I use
> busses & TBUFs or MUXes to get data from one point to another? I would
> appreciate help from people who know well how hardware will be
> realized in the Xilinx devices.

Simply leave this decision to the Xilinx software. There is a flag called
"tx" in the map tool which chooses the best implementation for tri-state
busses (MUX, carry chain or tri-state).

CU
Felix

--
/"\  ASCII ribbon campaign against HTML mail and postings.
\ /
 X   "Ich will eigentlich keinen Streit - ich will nur Recht haben"
/ \  Thorsten Gunkel am 15.01.2000
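For reference, the option Felix mentions is passed on the map command line rather than written in the HDL. A sketch of how it might be invoked is below; the design file name is invented, and the exact set of -tx values can differ between ISE versions, so treat this as an assumption to check against the map documentation.

# Hypothetical invocation ("mydesign.ngd" is a made-up file name):
# "-tx on" lets map transform the internal tri-state busses as it sees fit.
map -tx on mydesign.ngd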
Article: 51767

Hi,

I know Philips has an ASIC for MPEG encoding. It is called Empress and the
chip number is SAA 6752. Please don't think I am advertising for Philips. I
know there are chips from Broadcom and Sony also, so you should not have any
problem in getting an ASIC for MPEG encoding/decoding.
Article: 51768

Ray Andraka <ray@andraka.com> writes:
> I never said random. The issue is that the rules set is a very complex set
> of rules, based not only on coding style, but also on performance, area (a
> default setting if you don't specify it), what the logic between the
> registers actually is etc.

And timing constraints, I would hope. And thus the architecture. And speed
grade.

Homann
--
Magnus Homann, M.Sc. CS & E
d0asta@dtek.chalmers.se
"Austin Franklin" <austin@da98rkroom.com> writes: > > The issue is that the rules set is a very complex set of > > rules, > > Why do you say it is VERY complex? I do not see a set of rules that defines > what hardware Verilog constructs gets compiled down to as being VERY > complex. I am only talking about in general, as in, what does a case > statement compile down to? Depends on architecture, doesn't it? And constraints. Homann -- Magnus Homann, M.Sc. CS & E d0asta@dtek.chalmers.seArticle: 51770
Article: 51770

Davar,

You need to be a little more tenacious than that. The number 1 return on
Google is:

> Floating-Point HDL Packages: Documentation for VHDL floating po
> Documentation for VHDL floating point packages. ... Here is some basic
> documentation for the VHDL floating point packages I have posted to
> http://www.eda.org/fphdl. ...
> www.eda.org/fphdl/hm/0011.html - 16k - Cached - Similar pages

Granted, it takes you to the wrong subpage; however, if you start working
your way up you would have found the packages at:

http://www.eda.org/fphdl

Cheers,
Jim Lewis

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Jim Lewis
Director of Training              mailto:Jim@SynthWorks.com
SynthWorks Design Inc.            http://www.SynthWorks.com
1-503-590-4787

Expert VHDL Training for Hardware Design and Verification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Davar Robdan wrote:
> Dear Eric, Thank you for your answer, but I searched many times, and
> couldn't find any thing, can you send me a direct URL that contain a
> floating point ALU source in VHDL?
>
> Eric Smith <eric-no-spam-for-me@brouhaha.com> wrote in message
> news:<qhfzroee4w.fsf@ruckus.brouhaha.com>...
>> vaxent@my-deja.com (Davar Robdan) writes:
>>> I'm a VHDL learner, and looking for some VHDL source.
>>> Is there anyone who have a 32bit ALU with floating point written in
>>> VHDL? Or can you tell me where on the web I can find it?
>>
>> Yes, there is, and you can find it on the web at:
>>  http://www.google.com/
Article: 51771

Yes, you can store your registers in the BlockRAM, but you then have limited
read and write access to them, only one or a few at a time. If you can live
with that, BlockRAM is ideal.

Peter Alfke
=====================
DILEEP wrote:
> hi,
> i have around 150 registers (16 bit) in my design, can i use Block
> Ram bits for registers. i am using xcv800 Xilinx fpga.
> Thanking you
> Dileep
> Acl Hyderabad
> India
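A minimal sketch of what Peter describes (module name, signal names and sizes are invented): the 150 16-bit values sit in a single block RAM, and the port limitation shows up as one write and one synchronous read per clock. Whether a given synthesizer infers a block RAM from this template is tool-dependent; the alternative would be to instantiate the RAMB4 primitives directly.

// Hypothetical sketch - names are made up. 150 x 16-bit "registers" kept
// in block RAM instead of flip-flops (256 x 16 fits one Virtex block RAM).
module regfile_in_bram (
    input  wire        clk,
    input  wire        we,
    input  wire [7:0]  waddr, raddr,
    input  wire [15:0] wdata,
    output reg  [15:0] rdata
);
    reg [15:0] mem [0:255];

    always @(posedge clk) begin
        if (we)
            mem[waddr] <= wdata;   // one location written per clock
        rdata <= mem[raddr];       // one location read per clock
                                   // (synchronous read, as block RAM needs)
    end
endmodule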
"Peng Cong" <pc_dragon@sohu.com> schrieb im Newsbeitrag news:b0irt7$14lq$1@mail.cn99.com... > I'm working a design on Xilinx Virtex2 FPGA, in the constraint file a port > can be declared as "FAST" or "SLOW" > like: NET "Data_out<0>" SLEW = FAST; > What's the difference between the "FAST" and "SLOW", is "FAST" mean the > signal come out from the pin faster than > "SLOW"? Yes. The clock to output time is shorter as well as the transition time from HIGH/LOW and LOW/HIGH. This is usefull when doing high speed designs (like 133 MHZ SDRAM etc). Usually you try to work with slow outputs to reduce EMI and signal integrity problems. -- MfG FalkArticle: 51773
Article: 51773

A good reference for particle physics instrumentation is the adverts in the
CERN Courier; look at their web site:

http://www.cerncourier.com/buyers/

The LeCroy CAMAC module with the Xilinx had ECL I/O. If you don't need ECL,
there are a lot of PCI cards with various Xilinx parts on board.

Paul Smith
http://dustbunny.physics.indiana.edu/~paul

In article
<Pine.SOL.4.44.0301171815180.201-100000@mspacman.gpcc.itd.umich.edu>,
Mu Young Lee wrote:
> On Fri, 17 Jan 2003, John Larkin wrote:
> >
> > http://www.highlandtechnology.com/
> >
> > What sort of stuff are you working on?
>
> I was specifically interested in their fast programmable logic CAMAC
> modules which employed Xilinx FPGAs. I am not tied to CAMAC however. If
> there is a more modern off-the-shelf equivalent that would do as well.
>
> Mu
Article: 51774

In article <833030c0.0301192000.24caa78b@posting.google.com>,
Tom Hawkins <tom1@launchbird.com> wrote:
> Hello,
>
> I'm pleased to announce the initial release of Confluence 0.1,
> a new hardware design language created by Launchbird Design Systems, Inc.
>
> Confluence is a simple, yet highly expressive language
> that compiles into Verilog, VHDL, and C.
>
> Its implicit parallelism and modular style still feels like coding in
> Verilog or VHDL, yet the features of the language provide a level of
> flexibility unknown to either of the two.
>
> We are currently in the process of assembling a group of systems coded in
> Confluence to act as both benchmarks and as tutorials for our customers.
> The code generated from two of these systems has already been released to
> OpenCores.org:
>
>   http://www.opencores.org/projects/cf_fft/
>   http://www.opencores.org/projects/cf_cordic/
>
> To give you an idea of the typical code density, the bulk of CF_Cordic was
> written in just under 100 lines of Confluence code and CF_FFT was only
> twice that.

How about showing us the Confluence source code for these two examples? About
all we can judge by looking at the VHDL/Verilog files is that they do indeed
appear to be machine-generated.

--
Caleb Hess
hess@cs.indiana.edu