On a sunny day (Tue, 30 Mar 2004 16:47:52 +0100) it happened Jonathan Bromley <jonathan.bromley@doulos.com> wrote in <9k4j601d8o0kgbbf9bqikfjo4sjqe2ge28@4ax.com>: > >Good to know that you are making interesting progress with >your Verilog-driven video processing. It's something that >has been close to my heart for some time :-) Did you get >anywhere with your R-2R ladder successive-approximation >converter in a Xilinx FPGA, using an LVDS receiver as >a comparator???

OK, we differ on the DSP definition; for me DSP always also means some sequential processor, but perhaps that is not true. As for the R-2R, it worked, and it worked even better when I got the timing right, but I had these Philips TDA8708 chips. These have built-in AGC, are 8 bits, and need 2 clamp pulses, or rather drive pulses. Once you get these right it will park the sync tip at a nice level. A bit tricky to use, so I tapped the ADC and left the AGC for what it was. I DO use a simple diode clamp to get the sync tip a little stabilized. Actually (grin) there are 2 trimpots too hehe. JP
Article: 68226
When compiling a chip Quartus gives the following warning: > Warning: Converted TRI buffer to OR gate or removed OPNDRN > Warning: Converting TRI node lpm_bustri_work_inrec_op_1:hi_data_bus_driver|lpm_bustri:U1|din[0] that feeds logic to an OR gate

And indeed the Tristate Buffer is replaced by an OR gate and the pin cannot be used in tristate mode. When locating the message source Quartus shows a TDF (Altera AHDL language) file, ¨lpm_bustri.tdf¨ and there to the section: > VARIABLE > % Are the enable inputs used? % > IF (USED(enabledt)) GENERATE > dout[LPM_WIDTH-1..0] : TRI; > END GENERATE; > IF (USED(enabletr)) GENERATE > din[LPM_WIDTH-1..0] : TRI; > END GENERATE; and, more specifically, to the line: > din[LPM_WIDTH-1..0] : TRI;

There is no more exact explanation. The Tristate Buffer is generated in VHDL as follows: hi_data_bus_driver : lpm_bustri GENERIC MAP (LPM_WIDTH => 30 ) PORT MAP (data => int_dnio, enableDT => NOT n0_dir, enableTR => n0_dir, result => dnio_inbus, tridata => dnio_rec ); dnio_rec is the FPGA pin, int_dnio and dnio_inbus are internal data buses. This type of instantiation works well in other designs. The VHDL is synthesized by Synplify.

My question is why this happens and what can be done about it? Where can I find docs that better describe this behavior? Many thanks, Janos Ero CERN Div. EP
Article: 68227
My work is predominantly in DSP in areas like radar signal processing, sonar, digital wireless, imaging etc. All involve lots of math, sometimes very heavy duty math. I think that qualifies it as DSP. Most do NOT have sequential processors in them, which would disqualify them as DSP under your definition.

Jan Panteltje wrote: > > OK, we differ on DSP definition, for me DSP always means > also some sequenctial processor, but perhaps not true.

-- --Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email ray@andraka.com http://www.andraka.com "They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759
Article: 68228
The answer is simple: you don't have tristate busses inside (Altera) FPGAs. Tristate buffers are substituted by MUXes.

Martin -- ---------------------------------------------- JOP - a Java Processor core for FPGAs: http://www.jopdesign.com/

"erojr" <janos.nojunk.nospam.ero@cern.nojunk.nospam.ch> wrote in message news:c4cath$rep$1@sunnews.cern.ch... When compiling a chip Quartus gives the following warning: > Warning: Converted TRI buffer to OR gate or removed OPNDRN > Warning: Converting TRI node lpm_bustri_work_inrec_op_1:hi_data_bus_driver|lpm_bustri:U1|din[0] that feeds logic to an OR gate And indeed the Tristate Buffer is replaced by an OR gate and the pin cannot be used in tristate mode. When locating the message source Quartus shows a TDF (Altera AHDL language) file, ¨lpm_bustri.tdf¨ and there to the section: > VARIABLE > % Are the enable inputs used? % > IF (USED(enabledt)) GENERATE > dout[LPM_WIDTH-1..0] : TRI; > END GENERATE; > IF (USED(enabletr)) GENERATE > din[LPM_WIDTH-1..0] : TRI; > END GENERATE; and, more specifically, to the line: > din[LPM_WIDTH-1..0] : TRI; There is no more exact explanation. The Tristate Buffer is generated in VHDL as follows: hi_data_bus_driver : lpm_bustri GENERIC MAP (LPM_WIDTH => 30 ) PORT MAP (data => int_dnio, enableDT => NOT n0_dir, enableTR => n0_dir, result => dnio_inbus, tridata => dnio_rec ); dnio_rec is the FPGA pin, int_dnio and dnio_inbus are internal data buses. This type of instantiation works well on other design. The VHDL is synthesized by Synplify. My question is why this happens and what can be done against? Where can I find docs that better describe this behavior? Many thanks, Janos Ero CERN Div. EP
Article: 68229
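As a minimal sketch of what Martin describes (this is not the original poster's code; the port and signal names are invented for illustration), the bidirectional buffer can be inferred directly at the top-level pin with an inout port and a 'Z' assignment, so that the only tristate driver in the design sits in the I/O cell:

library ieee;
use ieee.std_logic_1164.all;

entity bidir_pin is
  generic ( WIDTH : natural := 30 );
  port (
    drive_en : in    std_logic;                            -- '1' drives the pad
    int_data : in    std_logic_vector(WIDTH-1 downto 0);   -- internal bus to drive out
    in_bus   : out   std_logic_vector(WIDTH-1 downto 0);   -- internal bus read from the pad
    pad      : inout std_logic_vector(WIDTH-1 downto 0) ); -- the FPGA pins
end entity bidir_pin;

architecture rtl of bidir_pin is
begin
  -- Drive the pad only when enabled, otherwise release it to high impedance.
  pad    <= int_data when drive_en = '1' else (others => 'Z');
  -- The input path is always available.
  in_bus <= pad;
end architecture rtl;

Synthesis tools infer the pad tristate from the 'Z' assignment; any purely internal bus then has to be described as a multiplexer, which is essentially the substitution Martin is referring to.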
I have a complex Virtex2 4000 project that was compiled for a -6 speed grade and (barely) passed timespec. The same project seems to run just fine on a -4 speed grade. What temperature do I need to keep the -4 at to ensure that it would be 100% compatible with a -6? What is the default temperature for the timespec analysis? Thanks for your time.
Article: 68230
On a sunny day (Tue, 30 Mar 2004 14:57:39 -0500) it happened Ray Andraka <ray@andraka.com> wrote in <4069D133.A4C05120@andraka.com>: >My work is predominantly in DSP in areas like radar signal processing, sonar, >digital wireless, imaging etc. All involve lots of math, sometimes very heavy >duty math. I think that qualifies it as DSP. Most do NOT have sequential >processors in them, which would disqualify as DSP under your definition.

OK, yes, my definition is not good, yours is better :-) But then ANYTHING that processes digital signals is a 'digital signal processor'? But that would include a normal processor too... DSPs were (a long time ago) processors with some special hardware, but still programmable. I for sure respect your work in digital signal processing, and with FPGAs! JP
Article: 68231
Hmm... I built something like that in Virtex; it works well. Jitter <= 1 ns, and the PLL locks perfectly. It can detect anything from 15 kHz to 100 kHz line rate, interlaced or non-interlaced... It requires a 100 MHz sampling clock, and the composite signal needs to be clamped/limited to LVTTL before entering the Virtex.
Article: 68232
On a sunny day (Mon, 29 Mar 2004 10:25:56 +1000) it happened Allan Herriman <allan.herriman.hates.spam@ctam.com.au.invalid> wrote in <n3re60dh0tg7hkms19vi78un1tlbs7ogqj@4ax.com>: >> >>Yes I use Iverilog too, it is fast, clean, and Linux command line. > >Fast? It was about ten times slower than Modelsim, the last time I >ran a benchmark. > >I notice you didn't mention "correct". For me, correctness is the >most important attribute of a simulator. > >Regards, >Allan.

Oh really? Well, fast - as opposed to horribly slow - using webpack (or whatever it is called). I have never used Modelsim so I dunno. But I can in Linux (and did) write VERY complicated stuff in Iverilog, and get all the algorithms right, in a flash. Then I take it to you-know-what-pack with the little windows and annoying stuff, and wait and wait and wait, but it will work. All the time, knowing that my algorithms are correct helps user confidence when in webslack ;-) JP
Article: 68233
Actually, DSP microprocessors sort of shanghaied the term DSP. DSP was around for a long time before microprocessors. DSP work in the '60s and '70s was all custom hardware, with little or no sequential processing in it. Signal processing generally refers to doing some math on signals captured from the environment in order to extract or refine the useful information in them. The 'digital' in front of that just means it is done in the digital domain rather than the traditional analog domain (signal processing was all analog before somewhere around 1960, and predominantly analog all the way up to around 1980).

-- --Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email ray@andraka.com http://www.andraka.com "They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759
Article: 68234
On Tue, 30 Mar 2004 15:14:41 -0500, Brannon King wrote: > I have a complex Virtex2 4000 project that was compiled for and passed > timespec (barely) for a -6 speed. The same project seems to run just fine on > a -4 speed. What temperature do I need to keep the -4 at to ensure that it > would be 100% compatible with a -6? What is the default temperature for the > timespec analysis? Thanks for your time.

Is this a one-off, or are you planning to put this into production? If this is a product you really should use a -6, or fix your design so that it will run in a -4. In theory you could super-cool the device, but that's hardly practical, and I'm sure that Xilinx will tell you that you're on your own if you try to ship a system like that.
Article: 68235
HDLmaker 7.1.2 is now available at http://www.polybus.com/hdlmaker/users_guide/ HDLmaker generates Verilog and VHDL code, scripts and project files for FPGAs and ASICs. It supports Synplify, Precision and XST as well as most common Verilog simulators like NCverilog, VCS and Modelsim. It's free/open source and licensed under a BSD style license.
Article: 68236
Brannon King wrote: > I have a complex Virtex2 4000 project that was compiled for and passed > timespec (barely) for a -6 speed. The same project seems to run just fine on > a -4 speed. What temperature do I need to keep the -4 at to ensure that it > would be 100% compatible with a -6? What is the default temperature for the > timespec analysis? Thanks for your time.

I'd assume the chips are identical, but the specifications are tighter. The chips are selected. You have to make sure that you don't overheat the chips anyway. As to the temperature, that should be answered in the datasheet. I'd assume a -4 can replace a -6 in all cases at all temperatures.

Rene -- Ing.Buero R.Tschaggelar - http://www.ibrtses.com & commercial newsgroups - http://www.talkto.net
Article: 68237
We test our chips in such a way that they are guaranteed to meet the datasheet parameters in the specified temperature and voltage ratings. In most cases, worst-case performance is at min voltage and max temperature, defined as 85 degr junction for commercial parts. In the "olden days", we also mentioned that delays vary by +0.35% per degree C worst case when you calculate delay increase at higher temperature, or by -0.25% when figuring out delay improvement at lower temperature. Nowadays, these extrapolations are not valid anymore, since the circuitry has become more exotic. In some areas we use sophisticated compensation techniques that play havoc with the simplified assumptions. And transistor delays and metal interconnect delays never behaved identically. So we discourage our users from extrapolating delay parameters for temperatures outside the guaranteed ranges of 0 to 85 junction temp for commercial, and the -40 to 100 junction for industrial. You are on your own when you operate outside those limits. Peter Alfke, Xilinx Applications

> From: "Brannon King" <bking@starbridgesystems.com> > Organization: Concentric Internet Services > Newsgroups: comp.arch.fpga > Date: 30 Mar 2004 15:14:41 EST > Subject: speed vs. temperature > > I have a complex Virtex2 4000 project that was compiled for and passed > timespec (barely) for a -6 speed. The same project seems to run just fine on > a -4 speed. What temperature do I need to keep the -4 at to ensure that it > would be 100% compatible with a -6? What is the default temperature for the > timespec analysis? Thanks for your time. > >
Article: 68238
Hello, I'm plugging away climbing the Altera DSP Builder learning curve. Right now I'm struggling with a problem that's beyond my level of VHDL knowhow -- I'd sure appreciate some tips. I made a VHDL component in Simulink, synthesized it, and simulated it in both Simulink and ModelSim. So far so good. I can also synthesize it in Quartus. Problem is I'm getting a ton of warnings like this one:

Warning: VHDL Use Clause warning at DSPBUILDER.VHD(93): more than one Use Clause imports a declaration of simple name std_logic_2d -- none of the declarations are directly visible

There are a lot of entities that have Use clauses like this: library dspbuilder; use dspbuilder.dspbuilderblock.all; library lpm; use lpm.lpm_components.all; And both packages lpm_components and dspbuilderblock define: type STD_LOGIC_2D is array (NATURAL RANGE <>, NATURAL RANGE <>) of std_logic; So I think I understand where the warnings are coming from. (In fact the report file from Simulink>SystemBuilder synthesis shows the same warnings.) What I don't have a clue about is:
1. What are the implications of "none of the declarations are directly visible"?
2. Other than the large number of warnings, should I care about them?
3. The Quartus Help talks about restricting the scope of the type declarations. How do I do that?
4. Is it a good or bad idea to edit the dspbuilder and lpm_components files to take out the common definitions and put them into, say, a PackageCommon?
5. Otherwise, what's the right approach for me to be taking?
Thanks, -rajeev-
Article: 68239
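On question 3, here is a minimal sketch of the two usual ways to deal with the hidden homograph. The entity and signal names below are invented, and it assumes only what the warning itself states, namely that both packages declare a type called std_logic_2d; as far as the language rules go, the warning is harmless until code actually refers to the plain name std_logic_2d, at which point it becomes an error.

library ieee;
use ieee.std_logic_1164.all;
library lpm;
use lpm.lpm_components.all;
library dspbuilder;
use dspbuilder.dspbuilderblock.all;

entity scope_demo is
end entity scope_demo;

architecture rtl of scope_demo is
  -- With both ".all" clauses above, the simple name std_logic_2d is a
  -- homograph and therefore not directly visible.  A selected (fully
  -- qualified) name always works:
  signal m : lpm.lpm_components.std_logic_2d(3 downto 0, 7 downto 0);
begin
  -- The alternative is to restrict the scope of one import: replace one
  -- ".all" clause with selective use clauses naming only what you need,
  -- e.g.  use lpm.lpm_components.lpm_bustri;
  -- so that only a single declaration of std_logic_2d is imported.
end architecture rtl;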
"Hendra Gunawan" <u1000393@email.sjsu.edu> wrote in message news:<c4a458$6gkqf$1@hades.csu.net>... > "Peter Sommerfeld" <petersommerfeld@hotmail.com> wrote in message > news:5c4d983.0403271458.43ee7ad9@posting.google.com... > > Unfortunately the AHDL > > stuff could not be testbenched though > > Is that why Altera MaxPlus II does not support testbench for Verilog either? > Other tools support testbench, MaxPlus II doesn't! MaxPlusII is not a simulation tool! You need to use something like ModelSim to simulate your Verilog sources. -aArticle: 68240
In the ISE tools, the timing reports and timing simulation netlists are generated by default using the worst-case parameters for a commercial-grade part, which is 85 degrees C and worst-case Vcc (1.425 V in the case of Virtex2), in the speedgrade specified. If you can guarantee you can operate at better temperature and/or voltage conditions, you can tell the ISE software a different worst-case temperature or voltage within the operating ranges for the device and it will calculate to those values, giving better numbers. While this will give you better numbers, it will probably not bridge the gap between two speedgrades (~30% speed difference). If you would like to give it a try though, you can go two ways about this: use the run you have, or kick off a new run with the updated parameters.

To use the implementation you have, simply open up Timing Analyzer, change the speed grade, specify the new temperature and voltage information (under the options tab) and run the analysis. If you are barely making it in a -6 you will likely still be missing timing, but you can at least see by how much. A better way to do this is to re-implement the design with the changed speedgrade, temperature and voltage specified at the beginning. To do that, add the new voltage and temperature information to your UCF, change the speedgrade and re-run map and place and route. The benefit of re-running the software is that it may make different choices in mapping, placement and routing if you tell it these conditions up front, and could give you a better result than if you simply re-use the result generated from a worst-case -6 run.

As for what you are seeing in the lab, the timing reports are generated for worst-case conditions, and many times the chip can and will operate faster than what the timing analyzer reports, especially when not run at the worst-case parameters; however, there is no guarantee that every chip will work properly at all times when it is being "over-clocked" as it sounds you have done. It is best to do diligent analysis and stay within the reported timing when operating the FPGA to have a reliable design. This brings back old memories of when I used to overclock my 486 processor and had a fast PC (for the time) but a somewhat unstable machine, especially when the room heated up in the summer. -- Brian

Brannon King wrote: > I have a complex Virtex2 4000 project that was compiled for and passed > timespec (barely) for a -6 speed. The same project seems to run just fine on > a -4 speed. What temperature do I need to keep the -4 at to ensure that it > would be 100% compatible with a -6? What is the default temperature for the > timespec analysis? Thanks for your time. > >
Article: 68241
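For reference, the temperature and voltage entries Brian mentions are global UCF constraints that look something like the lines below. The values here are invented examples, and the exact syntax and the legal ranges for your device and ISE version should be checked against the Constraints Guide:

# Hypothetical prorating values -- not from the original design.
TEMPERATURE = 60 C ;   # analyse timing at a 60 degC worst-case junction temperature
VOLTAGE = 1.5 V ;      # analyse timing at nominal Vccint instead of the worst-case 1.425 V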
On Tue, 30 Mar 2004 21:13:20 GMT, Jan Panteltje <pNaonSptaemltje@yahoo.com> wrote: >On a sunny day (Mon, 29 Mar 2004 10:25:56 +1000) it happened Allan Herriman ><allan.herriman.hates.spam@ctam.com.au.invalid> wrote in ><n3re60dh0tg7hkms19vi78un1tlbs7ogqj@4ax.com>: >>> >>>Yes I use Iverilog too, it is fast, clean, and Linux command line. >> >>Fast? It was about ten times slower than Modelsim, the last time I >>ran a benchmark. >> >>I notice you didn't mention "correct". For me, correctness is the >>most important attribute of a simulator. >> >>Regards, >>Allan. >Oh really? >Well, fast -as opposed to horibly slow- using webpack (or whatever >it is called). >I have never used modelsim so I dunno. >But I can in Linux (and did) write VERY complicated stuff in Iverilog, >get all the algo right, in a flash.

My experience with Iverilog was very different. I found that I was spending more time finding bugs in the simulator than finding bugs in my design, so I decided to stop using it after about a week. I reported a few of the bugs, and last I checked there were still some outstanding. These were simple bugs; see for example http://www.icarus.com/cgi-bin/ivl-bugs/incoming?id=936 It is also the slowest HDL simulator I have ever used. (I haven't used the crippled 'free' versions of Modelsim, however.) Regards, Allan.
Article: 68242
"Andy Peters" <Bassman59a@yahoo.com> wrote in message news:9a2c3a75.0403301532.30d1ec02@posting.google.com... > MaxPlusII is not a simulation tool! > > You need to use something like ModelSim to simulate your Verilog sources. Actually, it has a built in Simulation tool. A really crappy one, I must say! I currently used it for my project because my professor forced us to use this Jurassic Tool. MaxPlus II clumsy user interface and its inability to use a testbench make it very difficult for me to debug my circuit. Every signal has to be entered by dragging the mouse in the wave form or by writing a text file while every signal has to be written explicitly without any support of loop or any other control of data flow whatsoever. If I have a choice, I would have used ModelSim XE with Xilinx XST synthesis tool. Those are much better tools! HendraArticle: 68243
Peter Alfke wrote: > We test our chips in such a way that they are guaranteed to meet the > datasheet parameters in the specified temperature and voltage ratings. In > most cases, worst-case performance is at min voltage and max temperature, > defined as 85 degr junction for commercial parts. > In the "olden days", we also mentioned that delays vary by +0.35% per degree > C worst case when you calculate delay increase at higher temperature, or by > -0.25% when figuring out delay improvement at lower temperature. Nowadays, > there extrapolations are not valid anymore, since the circuitry has become > more exotic. In some areas we use sophisticated compensation techniques, > that play havoc with the simplified assumptions. And transistor delays and > metal interconnect delays never behaved identically. So we discourage our > users from extrapolating delay parameters for temperatures outside the > guaranteed ranges of 0 to 85 junction temp for commercial, and the -40 to > 100 junction for industrial. You are on your own when you operate outside > those limits.

There's an interesting opinion piece in a recent issue of IEEE Computer: Bob Colwell (formerly Intel's Pentium 3 chief architect) writes about "The Zen of Overclocking". Basically the point is that chip manufacturers must quote worst-case performance figures that they can guarantee. Consumer PC overclockers (and FPGA overclockers?) are just exploring in the margins of more or less (un)likely failure modes. But as Peter says, you're on your own! John

As a footnote, I got 2 years out of a particular Intel Celeron chip that was rated at 300 MHz but performed flawlessly at 450 MHz. Cheapest way I ever jumped ahead on the technology curve!
Article: 68244
> say! I currently used it for my project because my professor forced us to > use this Jurassic Tool.

Hendra, You're just too young. :-) MaxPlus II was a rockin' tool for its time. Very well received by the engineering community. All that praise, and I'm a Xilinx fan. Hmm... does that make me a Jurassic dinosaur? ;-) Matt
Article: 68245
"Matt" <bielstein2002@comcast.net> wrote in message news:z_pac.40715$w54.264141@attbi_s01... > Hendra, > > You're just too young. :-) MaxPlus II was a rock'n tool for its time. And what year was it? > Very > well received by the engineering community. At that time may be! But why bother using outdated tool while you can have a better one? I don't mind using Altera FPGAs, but the tool must be something other than MaxPlus II or Quartus. I just don't like any tool from any vendor that does not support testbench. Specifying your inputs by dragging your mouse instead of writing a code is just dumb! It may work for very small design, but for large design, it won't work! To my knowldege, due to the nature of synthesis, none of the synthesis tool supports testbench, including Xilinx XST. But Xilinx allows me to use their free ModelSim simulator to be incorperated with their synthesis tool. While Altera, from my understanding, doesn't have the free version of their ModelSim. HendraArticle: 68246
Jonathan Bromley <jonathan.bromley@doulos.com> wrote in message news:<0jri601cpqp24dr9odcocssdjkl0177mp3@4ax.com>... > On Tue, 30 Mar 2004 18:01:23 +0530, Anand P Paralkar > <anandp@sasken.nospam.com> wrote: > > > Consider two asynchronous blocks each generating data > > at rates R1 and R2 such that R1 > R2. Calculate the depth > > of a FIFO required between these two blocks so that there > > is no data dropped. No feedback/handshake mechnism should > > be assumed between the two blocks. <snip> > FIFOs are good when you have "bursty" data transfer, > so that the long-term data rate is slow enough for the > slower receiver to catch up during idle times. If > that's what you mean, it seems to me that the calculation > is fairly simple. Given a burst of N data items transmitted > at the higher rate R1, and received at the slower rate R2: > > burst duration = N/R1 > items received in that time = (N.R2)/R1 > backlog that must be stored in FIFO = N - (N.R2)/R1 > = N.(R1-R2)/R1 > Any sensible designer will then make the FIFO somewhat > longer than this, to accommodate any latency or response > time in the receiver. Given a response time T, you need > an additional T.R1 locations in the FIFO. > > It's also clear that the receiver will need some time > to consume this backlog, so the idle time between bursts > must be long enough; the minimum time is of course > mop-up time = backlog/R2 = N.(R1-R2)/(R1.R2)

This is all well and good if you can enforce the mop-up time back to the transmitter. But you may not be able to. Then you'll want some additional margin to allow for the possibility of additional bursts arriving before the first has been digested. The statistical properties of the source are the key, wouldn't you agree? <snip> Regards, -rajeev-
Article: 68247
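To put some purely hypothetical numbers on Jonathan's formula: a burst of N = 1000 items arriving at R1 = 100 Mitems/s and drained at R2 = 80 Mitems/s leaves a backlog of N.(R1-R2)/R1 = 1000 x 20/100 = 200 entries in the FIFO, and mopping that up takes backlog/R2 = 200 / (80 Mitems/s) = 2.5 us of idle time before the next burst may begin (plus whatever extra depth and time you budget for receiver latency and for the chance of back-to-back bursts).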
Hi folks!! I am now designing a real-time visual tracking system based on FPGAs. The images are captured by a CCD camera, and we do edge detection using a (Sobel-mask) 2D convolver. We also use two consecutive image frames, I(k) and I(k-1), subtracted pixel by pixel, in order to find the moving object. A "Moving Edge" is obtained by doing a logical AND operation between the subtracted image and the edge image (obtained by the Sobel mask) of the current frame. After finding the "Moving Edge" we must extract the object's shape using an Active Contour Model (or snake).

I have now implemented the "Moving Edge" detection function on a Xilinx FPGA. The next step is to design the "Snake-Based Outline Extraction" function block. I've found a lot of reference papers on Google about the "active contour model", and I finally want to choose between two methods: one is the greedy-algorithm-based snake model and the other is the Gradient Vector Flow (GVF) based algorithm. I wonder which one is more suitable for an FPGA-based architecture design? Could anyone give me some recommendations, or do you have any other good ideas for designing the object outline extraction function on an FPGA? Thanks a lot!!
Article: 68248
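For what it's worth, the combine stage described above (absolute frame difference, threshold, AND with the Sobel edge bit) is small enough to sketch in a few lines of VHDL. The names and the threshold below are invented for illustration, and the Sobel convolver and frame buffering are assumed to exist upstream:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity moving_edge_det is
  generic ( DIFF_THRESHOLD : natural := 16 );  -- made-up threshold value
  port (
    clk         : in  std_logic;
    pix_k       : in  unsigned(7 downto 0);    -- pixel from current frame I(k)
    pix_km1     : in  unsigned(7 downto 0);    -- same position in previous frame I(k-1)
    sobel_edge  : in  std_logic;               -- edge bit from the Sobel stage
    moving_edge : out std_logic );
end entity moving_edge_det;

architecture rtl of moving_edge_det is
begin
  process (clk)
    variable diff : unsigned(7 downto 0);
  begin
    if rising_edge(clk) then
      -- absolute frame difference |I(k) - I(k-1)|
      if pix_k >= pix_km1 then
        diff := pix_k - pix_km1;
      else
        diff := pix_km1 - pix_k;
      end if;
      -- a pixel is a "moving edge" when it changed enough AND lies on an edge
      if diff > DIFF_THRESHOLD and sobel_edge = '1' then
        moving_edge <= '1';
      else
        moving_edge <= '0';
      end if;
    end if;
  end process;
end architecture rtl;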
You didn't mention your image size or pixel rate. Those are a factor in determining the best approach.

YunghaoCheng wrote: > Hi folks!! > Now ,I am designing a real-time visual tracking system based on FPGAs. > > The images are captured by the CCD camera, and we do edge detection > by using (Sobel-mask) 2D convolver. > We also use two consecutive image > frames I(k) and I(k-1) to subtracted pixel by pixel ,in order to > find out the moving object. > > A "Moving Edge" is include by doing a logic AND operation between the > subtracted image and the edge image(obtained by Sobel-mask)of the > current frame. > > After finding out the "Moving Edge" we must to extract the object's > shape > by using Active Contour Model(or snake). > > Now I have implemented the "Moving Edge" detection function on > a Xilinx FPGA.The next step is to design the "Snake-Based Outline > Extraction" > function block. I've found a lot of reference papers on the Google > about the > "active contour model" and finally I want to choose two methods--> One > is Greedy algorithm based snake-model and the other one is Gradient > Vector Flow (GVF)based algorithm. > I wonder which one is more suitable for FPGA based architecture > design? > Could anyone can give me some recommendations or you have any other > good ideas > to design the object outline extraction function on FPGA..?? > > Thanks a lot!!

-- --Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email ray@andraka.com http://www.andraka.com "They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759
Article: 68249
Martin Schoeberl wrote: > The answer is simple: You don't have tristate busses inside (Altera) > FPGAs. Tristate buffers are substituded by MUX.

There are no tristate buses inside. The only tristate bus is directly connected to the pins; in the lpm definition this is the pin name. The internal buses have only one source, which in the case of the "result" is the tristate buffer. Thanks for the help, but this is not the case. Janos Ero CERN Div. EP