Magnus Homann wrote: > [Rumours] > Wasn't the Ariane 5 supposed to have different SW for some controlling > functions, but they ran out of time, and didn't implement it. The > results are known. > [End Rumours] Hi, I think perhaps that you are referring to the first flight of the Ariane 5, which failed as a result of a software error. If memory serves, they used a subsystem that was "inherited." In flight, with real data for this new launch vehicle, there was an overflow condition which was, well, bad. There are other examples of software errors similar to this, and they are still happening, but this was in a critical system and the vehicle was lost. If you want a more detailed response, I can look up the particulars. Have a good evening, ---------------------------------------------------------------------- rk The 20th Century will be remembered, stellar engineering, ltd. when all else is forgotten, as the stellare@erols.com.NOSPAM century when man burst his terrestrial Hi-Rel Digital Systems Design bounds. -- Arthur Schlesinger
Article: 21676
In article <ltzori3iru.fsf@mis.dtek.chalmers.se>, Magnus Homann <d0asta@mis.dtek.chalmers.se> wrote: (snip) > > [Rumours] > Wasn't the Ariane 5 supposed to have different SW for some controlling > functions, but they ran out of time, and didn't implement it. The > results are known. > [End Rumours] > > Homann > -- > Magnus Homann, M.Sc. CS & E > d0asta@dtek.chalmers.se > AFAIK, the problem was that a controller for a smaller rocket was reused for the Ariane 5, and they loaded the software for the small rocket by mistake. The rocket turned too fast, lost control, and someone punched the big red button. This was not a system design problem; it was a build configuration management problem. Diversity might not have helped in this case, because they still might have grabbed all of the different software from the "small rocket" configuration... -- Greg Neff VP Engineering *Microsym* Computers Inc. greg@guesswhichwordgoeshere.com
Article: 21677
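For reference, the inquiry report linked later in this thread traced the failure to an unprotected conversion of a 64-bit floating-point horizontal-velocity value into a 16-bit signed integer, in alignment code inherited from the Ariane 4. A minimal Python sketch of such a narrowing conversion (illustrative only; the real Ada code raised an unhandled Operand Error exception rather than wrapping silently):

```python
def to_int16_unchecked(x: float) -> int:
    """Narrow a float to a 16-bit signed integer with wraparound,
    as a raw conversion with no range check would."""
    n = int(x) & 0xFFFF
    return n - 0x10000 if n >= 0x8000 else n

# Within the Ariane 4's flight envelope the conversion is faithful:
print(to_int16_unchecked(1000.0))   # 1000
# The Ariane 5's larger horizontal velocity exceeded the 16-bit
# range, and the narrowed value is garbage:
print(to_int16_unchecked(40000.0))  # -25536
```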
In some space-critical applications the whole system is dual or triple redundant all the way down to the power converters (which convert the ~48 - 70 Volt supply down to whatever voltage is needed). Most applications that do not want to have a single point of failure in an FPGA will usually have two or more FPGAs perform the function, each powered by a separate power converter. You then have to pick an FPGA that has cold-spareable pads (meaning that they can withstand a voltage when they are powered down) if you plan to switch to a redundant FPGA when you detect an error. But you are still left with a single point of failure on the actual circuit board and backplane if the FPGA is an interface to the backplane. What you normally have to do is think about all the failure modes that can happen and then decide how much engineering you want to put into preventing each failure mode. The more redundant a system is, the larger and more costly it is. So depending on weight, power, size, cost and thermal considerations you have to weigh which failures are acceptable and which are not. As a result of the analysis you will be able to decide if you need redundant buses, redundant FPGAs and redundant boards. I've worked on systems where we can switch the whole processing system over to a redundant system if a failure is detected in one of the parts. And I've worked on others where it is good enough to be able to just switch out the part by switching power converters. Have fun with the analysis EDM wrote: > Ok, I think I need to clarify: > > "clue point" stays for "key point" -- simple mental error, as Greg > Alexander <galexand@sietch.bloomington.in.us> suggested. > > and "redound" stays for "make redundant". > > Sorry for English. I just translated from Italian :-) > > I hope that now somebody could help me. > > edemarchi@hotmail.com
Article: 21678
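The 2-out-of-3 voting that redundant designs like these rely on reduces to a bitwise majority function. A sketch in Python (illustrative only; in real hardware the voter is itself a small piece of logic that has to be analyzed as a potential single point of failure):

```python
def vote_2oo3(a: int, b: int, c: int) -> int:
    """Bitwise 2-out-of-3 majority vote: each result bit takes the
    value that at least two of the three inputs agree on."""
    return (a & b) | (a & c) | (b & c)

# A single corrupted copy is outvoted by the other two:
good = 0b10110101
corrupted = good ^ 0b01000000  # one bit flipped, e.g. by an SEU
print(vote_2oo3(good, corrupted, good) == good)  # True
```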
Hi, See the next URL; this is a workaround: http://support.xilinx.com/techdocs/7323.htm Set the default Global Buffer to DONT_USE. <spyng@my-deja.com> wrote in message news:8bp2hn$eaq$1@nnrp1.deja.com... > hi, > did it work?! where do you set it? > > I have tried to set the same thing (don't use) in the GUI constraint > editor for FPGA Express, but when the design is optimized, it is mapped > to a BUFGP. > Then Translate will warn and map with an error. > My pin loc constraint is set in the *.ucf file. > > I will try it again, any other special thing that you did? Thanks
Article: 21679
Remember that not only are the SEU events critical, but the total dose is very critical as well. Usually the total dose tolerance is very low on FPGAs, which dictates that the life of the part may be very short. rk wrote: > Ray Andraka wrote: > > > Kate, > > > > The configuration 'SRAM' in SRAM based FPGAs is not what you would normally > > consider an SRAM cell, rather it is a D register that is considerably more > > robust than the D registers in your design and orders of magnitude more robust > > than the SRAM hanging off the microprocessor. The readback capability can be > > exploited to effect a continuous non-invasive health monitoring so that a reload > > can be done when the configuration does get upset. I've got a current Virtex > > design that will be going into space in a year or two. > > Some general babble, responding to some comments in a variety of posts. > > Here's some data, from Los Alamos National Labs, presented at MAPLD 1999, for Virtex > devices: > > LETth Saturated X-Sec > MeV-cm^2/mg cm^2/bit > > CLB 5.0 6.5 x 10^-8 > LUT 1.8 21.0 x 10^-8 > BRAM 1.2 16.0 x 10^-8 > > The data for the XQR4000XL series (Lockheed-Martin took the data) is not that > different from that above. Looking at the curve, it appears that the Virtex has a > smaller cross-section per bit. > > For these parameters (for those not familiar with them) a high LET threshold is > desirable. All of these values are considered low and make the devices susceptible > to upsets by protons, a threat in low earth orbits. The saturated cross-sections > are relatively low per bit for a commercial/military grade device (good); note that > one must multiply this by the number of bits to get the device cross-section. Of > course, similar to the analysis of processors, upsets in some bits may simply be a > don't care or be of no significance to either function or reliability; estimating > that accurately is difficult but these numbers could serve as an upper bound. > Upset rates would be dependent on where one is flying and the space weather. In general, > for a device in this class of hardness and size, it would be assumed that upsets > would be a rather common occurrence and one of the variety of methods for dealing > with this would be used. The suitability of a particular method would be dependent > on the application, the system design, and various reliability requirements. These > vary all over the place so no general statement could be made. > > ---------------------------------------------------------------------- > rk History will remember the twentieth > stellar engineering, ltd. century for two technological > stellare@erols.com.NOSPAM developments: atomic energy and > Hi-Rel Digital Systems Design space flight. -- Neil Armstrong, 1994
Article: 21680
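As a rough sense of scale for the numbers above, an upper bound on the upset rate follows from multiplying the saturated cross-section per bit by the bit count and the particle flux. Only the LUT cross-section below comes from the Los Alamos data quoted above; the bit count and flux are hypothetical, purely to show the arithmetic:

```python
def upsets_per_day(xsec_cm2_per_bit: float, n_bits: float,
                   flux_per_cm2_day: float) -> float:
    """Crude upper-bound upset rate: device cross-section (per-bit
    cross-section times bit count) times an assumed particle flux."""
    return xsec_cm2_per_bit * n_bits * flux_per_cm2_day

# 21.0e-8 cm^2/bit (LUT figure above), with an assumed 1e6
# configuration bits and an assumed flux of 100 particles/cm^2/day:
print(round(upsets_per_day(21.0e-8, 1e6, 100.0), 6))  # 21.0
```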
Does anybody know about any decent quantitative comparison between using VHDL at the RTL level as opposed to the use of structural VHDL with proper floorplanning, for targeting FPGAs (speed, area)? Cheers.
Article: 21681
Zoltan Kocsi wrote: > Ray Andraka <randraka@ids.net> writes: > > > Rickman wrote: > > > > > But I think my point is still valid. I doubt that anyone would expect > > > Xilinx to provide such support. It would be a far reach of the mind for > > > a user to expect anyone to support the operation of tools that they did > > > not provide. Again, the analogy would be like asking Intel to support > > > the GNU tools. > > > > > > > Not really, there's a fundamental difference. When you buy an Intel processor, you > > know exactly what its function is and you know the chip works. With an FPGA, you > > have a hardware design in there too, which from what I have seen is more often than > > not a poor fit to the FPGA. That invariably generates the "you say the chips run at > > XX MHz, but I can't get my design to run even at XX/10 or 20" calls. > > When I buy a microcontroller, its function is not determined. It can be > turned into a washing machine controller or a television remote control > or a talking Barbie doll or a pocket calculator or whatever. It depends > on the exact bit pattern stored in its program memory. Very much like > an FPGA chip, which can be turned into all sorts of things by varying > the bit pattern in its program memory. > Well, no. Its application is not determined, but its function is. It is a rather complex state machine that executes a limited number of different instructions, generally in a serial fashion. The hardware is what it is and you can't change it no matter what you do to the processor (well, putting power on it backwards to change it into a lump of epoxy encapsulated glass doesn't count). As a result, the permutations of how it gets used in a system are pretty well limited. The timing of the signals, the functions of the pins etc all remain pretty much the same (some pins may be programmable for more than one function, but you can't move a pin to another location etc). 
All of this constrains the range of applications much more so than what you have with an FPGA. It also allows the vendor to get away with a relatively small number of application notes discussing how it gets used in a system. An error in the microprocessor instruction stream generally will not damage the processor, and each instruction is more or less stand-alone. An FPGA bitstream is very tightly coupled to itself, in that 1) a wrong bitstream can burn up the device in many families, and 2) the bitstream as a whole has to be internally consistent to make the connections between logic blocks. > > The operation of a microcontroller might be more pre-defined than that of > an FPGA but it is not fundamentally different. It's a piece of HW which > can be turned, by means of some code, into a function-specific unit. > The code or bitstream is usually derived from a higher level description > of the needed functionality using CAE tools in both cases. > The difference is that microcontroller vendors are quite open about > the "bitstream" format. Thus, you can handcraft bitstreams if you like > and you also have a choice of tools. FPGA vendors chose to lock all the > doors and limit your choice of tools and ways when determining the final > function of their chips. > > I think it is more of a cultural issue than a technical one. FPGAs are > descendants of ASICs where closed doors are normal, so are expensive > tools, support engineers at your site and so on. FPGAs were born with > an infrastructure, that is, the tools and the computers that run them > were already in existence. It started that way and it remained that way. > The CPU world was born without the infrastructure, there were no computers > on every engineer's desk for there were no processors to build them from > and thus the vendor *had* to give away all the info to make it possible > for the customer to hand-assemble the code and burn it into those HUGE > 2KB PROMs. 
It just became the norm that with a CPU you get the instruction > encoding info together with anything else that may or may not be relevant > to your design. It also became customary that you get the tools wherever > you want to; the vendor delivers you chips. If they offer you a compiler > then it is just a courtesy act. On the other hand, they don't support > you if you have software problems, and that's the way it should be, IMHO. > They guarantee that the silicon does whatever they said it would; > how you create the bitstream is out of their domain. > > FPGA vendors apparently say that they must support everything on > Earth that can generate a bitstream for their chips and thus it is > just economical to keep the number of such things to the absolute > minimum. That's their decision, it is not a law of nature in my > opinion. They differ from the processor bunch because they want to > differ and not because they are inherently different. > > If Intel says that "The Pentium XYZ can do a 3D mapping in software > in less than N us" and you write some surface mapping using the > "Graphics algorithms for dummies" textbook, compile with a compiler > you rolled yourself, and it's just dog slow, would you go to > Intel screaming? No. > If the FPGA databook says that the chip is capable of 250MHz > 16 bit sync counting and you create a bitstream with some home > made tools, compiling your counter from "The idiot's guide to > logic design" examples section, and your counter can't do more > than 10MHz, I'd say you have no more ground to call the FPGA support > than you had to call Intel. > > With some exaggeration, instead of saying that you can call them if > you have problems with *their* FPGA tools, they act generously and > say that you can call them if you have any problem with any FPGA tool. > Of course, they first make sure that there could not possibly be any > other tool ... 
> > Zoltan > > -- > +------------------------------------------------------------------+ > | ** To reach me write to zoltan in the domain of bendor com au ** | > +--------------------------------+---------------------------------+ > | Zoltan Kocsi | I don't believe in miracles | > | Bendor Research Pty. Ltd. | but I rely on them. | > +--------------------------------+---------------------------------+ -- -Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email randraka@ids.net http://users.ids.net/~randrakaArticle: 21682
Perhaps you're not toggling GSR. http://support.xilinx.com/techdocs/5009.htm Taras Zima wrote: > > In our design the result of Verilog RTL simulation does not agree with > simulation of the gate level file. We are using Xilinx SpartanXL device > and Modelsim Verilog simulator (Xilinx edition). We have registers with > chip enable inputs and muxes generated by Xilinx Core Generator. The > problem is that after writing to a register and then reading from it we > are getting unknown values. We do not think that there is a timing > problem here since in the test bench we provide plenty of time between > assigning values. > Does anyone out there have any similar problems? Any help will be > appreciated. > > tz -- Paulo //\\\\ | ~ ~ | ( O O ) __________________________________oOOo______( )_____oOOo_______ | . | | / 7\'7 Paulo Dutra (paulo@xilinx.com) | | \ \ ` Xilinx hotline@xilinx.com | | / / 2100 Logic Drive (800) 255-7778 | | \_\/.\ San Jose, California 95124-3450 USA | | Oooo | |________________________________________oooO______( )_________| ( ) ) / \ ( (_/ \_)
Article: 21683
Note: cross-posted to sci.space.tech from comp.arch.fpga. Craig Taniguchi wrote: > Remember that not only are the SEU events critical, but the total dose is very critical as well. > Usually the total dose tolerance is very low on FPGAs, which dictates that the life of the part may > be very short. Hi Craig, I would not fully agree with this statement based on published data sets. In some cases, yes, the total dose performance of FPGAs is quite low and without extra shielding, their lifetime could be quite short. There are a number of products out there, commercial/military devices, that are very total dose soft. For example, complete failure at approximately 2.2 krad(Si) has been seen. For many missions this would translate into a short lifetime. On the other hand, commercial/military grade devices in modern technology (3.3V or lower, for example) have typically shown solid "radiation-tolerant" performance levels or even radiation-hard levels(*). In fact, going through the published data sets, one finds, for example, that products from Quicklogic (QL3000 series), Xilinx XQR4000XL, and Actel (A54SX, RT54SX, and A54SX-A) all do better than 30 krad(Si), for the devices tested. In particular, the XQR4000XL series is specified at 60 krad(Si). That is fine for many civilian space missions without any extra shielding. (**) Actel RT54SX devices can go from 50 to over 100 krad(Si) in capability and some runs on prototype A54SX-A series devices have shown better than 200 krad(Si) performance levels. The Actel RH series devices are specified at 300 krad(Si). This range of performance levels is, in my opinion, adequate for the vast majority of civilian space (i.e., commercial and space science) missions currently being flown. (***) Obviously not all parts are good for all missions. Note that many LEO missions do just fine with parts that are hard to 10 krad(Si). (*) Here I'm using the following definitions for total dose, all numbers in krad(Si) - and yes, that is not an SI unit! 
radiation-soft: < 20 radiation-tolerant: > 20 and < 100 radiation-hard: > 100 (**) As always, there are no absolutes and your mileage may vary. Orbit, solar cycle, solar flares, manufacturing lot, bias levels, and position in spacecraft can all affect total dose performance. Lot-to-lot variation for devices fabricated in a commercial foundry in particular is an important variable (that is, one must verify by test). (***) One notable exception and a difficult mission is when one goes to Europa. Here are some numbers I'm pulling from a paper at MAPLD 1999. Shielding ========= Outside of spacecraft: Several Grad(Si) 100 mils of Al 4 Mrad(Si) 450 mils of Al 667 krad(Si) 1.5 in of Al 30 krad(Si) 500 mils of Tungsten 30 krad(Si) Just my $0.02, Have a good evening, ---------------------------------------------------------------------- rk The Soviets no longer were a threat stellar engineering, ltd. in space, and in the terms that stellare@erols.com.NOSPAM became commonplace among the veteran Hi-Rel Digital Systems Design ground crews, as well as the astronauts, the dreamers and builders were replaced by a new wave of NASA teams, bureaucrats who swayed with the political winds, sadly short of dreams, drive, and determination to keep forging outward beyond earth. -- Shepard and Slayton.
Article: 21684
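The krad(Si) bands defined above map directly onto a classification function; a trivial Python encoding (the boundary values at exactly 20 and 100 are assigned arbitrarily here, since the definitions above leave them open):

```python
def classify_total_dose(krad_si: float) -> str:
    """Classify total-dose capability using the bands defined above:
    soft below 20 krad(Si), tolerant from 20 up to 100, hard above 100."""
    if krad_si < 20:
        return "radiation-soft"
    if krad_si <= 100:
        return "radiation-tolerant"
    return "radiation-hard"

print(classify_total_dose(2.2))   # radiation-soft (the failure cited above)
print(classify_total_dose(60))    # radiation-tolerant (XQR4000XL spec)
print(classify_total_dose(300))   # radiation-hard (Actel RH series spec)
```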
Hi, I'm looking for people to send me their placed and routed Virtex bitstreams (Intel Hex format preferred) so that I can experiment with different compression algorithms. At Fujitsu Australia we generally download our FPGAs from a host processor using one of the slave modes. To save space, we often use a simple run length encoding, but this doesn't give a very good compression ratio for the more tightly packed designs. This will become more important for us as we are about to start using Virtex parts, which have huge bitstreams. I would like to select a better compression algorithm, but I need to know how the different compression algorithms will work with "real world" designs. I will publish the results (compression ratio vs compression algorithm vs %utilisation of the part vs part number) in this newsgroup when I have some answers. The more examples you send me, the more significant the results will be. Be sure to include the part number and the reported part utilisation with the hex file. And don't worry, due to the secret nature of the Xilinx bitstreams, I have no practical way of working out what your design does. (And if anyone wants to send 4000 or Spartan or Spartan2 files, they will also be used in the study.) Thanks in advance, Allan. P.S. Remove the .hates.spam part from my email address. (I *am* speaking for the company this time!) -- Allan Herriman Senior Design Engineer mailto:allan.herriman.hates.spam@fujitsu.com.au Fujitsu Australia Tel: (+61 3) 9845 4341 5 Lakeside Drive, Burwood East Fax: (+61 3) 9845 4572 Victoria, 3151, AUSTRALIA "Thirty five dollars and a six pack to my name."
Article: 21685
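For comparison, the "simple run length encoding" baseline mentioned above might look like the byte-oriented sketch below (the encoding format is made up for illustration, not Fujitsu's actual scheme); a dictionary coder such as zlib will usually beat plain RLE on densely packed designs:

```python
import zlib

def rle_encode(data: bytes) -> bytes:
    """Byte-level run-length encoding: emit (count, value) pairs,
    with runs capped at 255 so the count fits in one byte."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

# A sparse "bitstream" (long zero runs) compresses well under RLE:
sparse = bytes(1000) + b"\xff" * 24
print(len(rle_encode(sparse)))                   # 10
print(len(zlib.compress(sparse)) < len(sparse))  # True
```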
On Wed, 29 Mar 2000 00:44:09, Ray Andraka <randraka@ids.net> wrote: > Well, no. Its application is not determined, but its function is. It is a rather > complex state machine that executes a limited number of different instructions, generally > in a serial fashion. The hardware is what it is and you can't change it no matter what > you do to the processor (well, putting power on it backwards to change it into a lump of > epoxy encapsulated glass doesn't count). As a result, the permutations of how it gets > used in a system are pretty well limited. The timing of the signals, the functions of the > pins etc all remain pretty much the same (some pins may be programmable for more than one > function, but you can't move a pin to another location etc). All of this constrains the > range of applications much more so than what you have with an FPGA. It also allows the > vendor to get away with a relatively small number of applications notes discussing how it > gets used in a system. An error in the microprocessor instruction stream generally will > not damage the processor, and each instruction is more or less stand-alone. An FPGA > bitstream is very tightly coupled to itself, in that 1) A wrong bitstream > can burn up the device in many families, 2) the bitstream as a whole has to be internally > consistent to make the connections between logic blocks. I really hadn't considered this but yes, the Xilinx and Synplicity tools do warn (actually more like stop dead) of impending doom when two drivers are on the same net and they could be activated at the same time. This is an excellent argument in favor of keeping the bitstream proprietary. Good grief, I have enough problems trying to get the design done without worrying about the details of the bitstream. This is the least of my worries! At the top of the list, as of today, is availability. 
I was "promised" 600Es (FG680s) about now, but instead got a note from the distributor indicating another 16-24 weeks, depending on speed grade. Gack! I'm dead. Without bad luck I'd certainly have none! (I started this project with DynaChip). ---- KeithArticle: 21686
On Tue, 28 Mar 2000 02:50:21, rk <stellare@nospam.erols.com> wrote: > Greg Neff wrote: > > > > how do you make sure that the voting circuits for 2 out of 3 > > > work 100% of the time? > > (snip) > > BTW, diversity has not been used in systems that I have seen. The > > argument for diversity is that it compensates for latent failure modes, > > such as software bugs. The argument against diversity is that it is > > more practical to design and thoroughly V&V one system, than to design > > and V&V three diverse systems that have to work together in a redundant > > configuration. > > The one example of a system with diversity that I am aware of is the Space > Shuttle's main computer system. It consists of 5 computers, with identical > hardware. The software, however, is identical on the 4 computers that > actually do the work. A fifth computer, running but not controlling the > vehicle unless commanded to, runs software developed by a completely > independent team. This is correct. Note the astronauts have also said that they would never pull the switch to go to the redundant software (it is a manual override). They trust the Shuttle OBS and have never tried the other. > > Anyone else know of any other examples? Other than multiple parallel, but identical, TMR circuits in a crypto processor I don't know of any. It's incredibly expensive to duplicate everything, especially the intellectual part. Then, when you get an error after years of fault-free operation, do you want to trust a newbie? When do you make that determination? ...likely when things have gone so wrong it's too late. I believe it's better to throw that money and talent at making sure the original problem is solved. I believe the Shuttle OBS is evidence of this, both ways. ---- Keith
Article: 21687
On Tue, 28 Mar 2000 21:08:05, Magnus Homann <d0asta@mis.dtek.chalmers.se> wrote: > rk <stellare@nospam.erols.com> writes: > > > Greg Neff wrote: > > > > > > how do you make sure that the voting circuits for 2 out of 3 > > > > work 100% of the time? > > > (snip) > > > BTW, diversity has not been used in systems that I have seen. The > > > argument for diversity is that it compensates for latent failure modes, > > > such as software bugs. The argument against diversity is that it is > > > more practical to design and thoroughly V&V one system, than to design > > > and V&V three diverse systems that have to work together in a redundant > > > configuration. > > > > The one example of a system with diversity that I am aware of is the Space > > Shuttle's main computer system. It consists of 5 computers, with identical > > hardware. The software, however, is identical on the 4 computers that > > actually do the work. A fifth computer, running but not controlling the > > vehicle unless commanded to, runs software developed by a completely > > independent team. > > > > Anyone else know of any other examples? > > [Rumours] > Wasn't the Ariane 5 supposed to have different SW for some controlling > functions, but they ran out of time, and didn't implement it. The > results are known. > [End Rumours] No. It was a management screw-up (much like the Hubble's first "eyes"). They adopted the Ariane 4's flight software and didn't test-test-test. ...too expensive. For the skinny see (among other places): http://www.cs.purdue.edu/homes/palsberg/ariane5rep.html ---- Keith
Article: 21688
I don't know of a formal study if that's what you are asking. I do know that, even with Virtex, you can get performance gains of better than 50% with floorplanning regardless of how the design was entered. As the density increases, the benefit from floorplanning also increases in terms of both area and performance. With good coding style and a decent synthesizer, you can get pretty good logic with careful RTL-level coding. Coding at the structural level can sometimes get you a better implementation, for example if you are doing something unusual with the carry chain, but for the most part coding at the RTL level will do OK. That said, one valid reason for structural coding is to put placement constraints in the code. An RTL-level design won't let you do the constraints in the design. You can still put constraints in through a constraints file or in the graphical floorplanner, but you lose the advantage of a hierarchical design (i.e., you need to individually floorplan each instance of a circuit even if it is in the design many times). For logic that will be instantiated multiple times or that I will (or someone else will) be using in another design, I will often put the placement in structural VHDL. Where it is just one or two instances, I'll do it in the floorplanner because it is faster and the code is easier to read. The other time I use structural-level coding is when I just can't get the synthesis tools to turn out the implementation I want - previously that included RAMs. Now about the only time I run into that is when using the carry chain in unorthodox ways (i.e., the synthesizer pretty much only uses the carry if it recognizes an add/subtract or increment/decrement, and sometimes it is pretty stubborn about putting logic on the wrong side of the carry chain, which makes two levels of logic). 
George wrote: > Does anybody know about any decent quantitative comparison between using > VHDL at the RTL level as opposed to the use of structural VHDL with proper > floorplanning, for targeting FPGAs (speed, area)? > > Cheers. -- -Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email randraka@ids.net http://users.ids.net/~randraka
Article: 21689
Keith R. Williams wrote: > > The one example of a system with diversity that I am aware of is the Space > > Shuttle's main computer system. It consists of 5 computers, with identical > > hardware. The software, however, is identical on the 4 computers that > > actually do the work. A fifth computer, running but not controlling the > > vehicle unless commanded to, runs software developed by a completely > > independent team. > > This is correct. Note the astronauts have also said that they > would never pull the switch to go to the redundant software (it > is a manual override). They trust the Shuttle OBS and have never > tried the other. > > > > Anyone else know of any other examples? > > Other than multiple parallel, but identical, TMR circuits in a > crypto processor I don't know of any. It's incredibly expensive > to duplicate everything, especially the intellectual part. Then, > when you get an error after years of fault-free operation, do you > want to trust a newbie? When do you make that determination? > ...likely when things have gone so wrong it's too late. I > believe it's better to throw that money and talent at making > sure the original problem is solved. I believe the Shuttle OBS > is evidence of this, both ways. Here's an excerpt from: The Space Shuttle Primary Computer System A. Spector and D. Gifford Communications of the ACM, September, 1984, p. 874 which is "interesting" ... AS. Could you describe a training scenario on the SMS that caused a problem for you? Clemons. Yes - it was a "bad-news - good-news" situation. In 1981, just before STS-2 was scheduled to take off, some fuel spilled on the vehicle and a number of tiles fell off. The mission was therefore delayed for a month or so. There wasn't much to do at the Cape, so the crew came back to Houston to put in more time on the SMS. 
One of the abort simulations they chose to test is called a "TransAtlantic abort," which supposes that the crew can neither return to the launch site nor go into orbit. The objective is to land in Spain after dumping some fuel. The crew was about to go into this dump sequence when all four of our flight computer machines locked up and went "catatonic." Had this been the real thing, the Shuttle would probably have had difficulty landing. This kind of scenario could only occur under a very specific and unlikely combination of physical and aerodynamic conditions; but there it was: Our machines stopped. Our greatest fear had materialized - a generic software problem. We went off to look at the problem. The crew was rather upset, and they went off to lunch. AS. And contemplated their future on the next mission? Clemons. We contemplated our future too. We analyzed the dump and determined what had happened. Some software in all four machines had simultaneously branched off into a place where there wasn't any code to branch off into. This resulted in a short loop in the operating system that was trying to field and to service repeated interrupts. No applications were being run. All the displays got a big X across them indicating that they were not being serviced. rk
Article: 21690
A new version of the "academic" VPR and T-VPack packing, placement and routing CAD tool set for research is available on my web site. This latest version includes the enhancements Alexander (Sandy) Marquardt made to VPR during his M.S. degree -- timing-driven logic block packing and timing-driven placement. It can be freely used for non-commercial research, and can be downloaded from: http://www.eecg.toronto.edu/~vaughn/vpr/vpr.html Just for clarity, this code is the "academic" (i.e. code written at the University of Toronto during various students' grad degrees) VPR -- it is not the code for the more recent, commercial version of VPR. Vaughn Betz
Article: 21691
It's even worse than that. Your reference is to an easy-to-make mistake of multiple drivers on a line, which the tools will prevent. When you crawl inside the bitstream it is very easy to make more than one signal an output into a route. For example, I am acutely aware of one FPGA that gets programmed with "let the magic smoke out" if you load it with a bitstream with all the bits set to one. You program enough misconnects at once, and you have less than a fraction of a second before your part does its China Syndrome imitation. Not many devices are immune either! "Keith R. Williams" wrote: > On Wed, 29 Mar 2000 00:44:09, Ray Andraka <randraka@ids.net> > wrote: > > > Well, no. Its application is not determined, but its function is. It is a rather > > complex state machine that executes a limited number of different instructions, generally > > in a serial fashion. The hardware is what it is and you can't change it no matter what > > you do to the processor (well, putting power on it backwards to change it into a lump of > > epoxy encapsulated glass doesn't count). As a result, the permutations of how it gets > > used in a system are pretty well limited. The timing of the signals, the functions of the > > pins etc all remain pretty much the same (some pins may be programmable for more than one > > function, but you can't move a pin to another location etc). All of this constrains the > > range of applications much more so than what you have with an FPGA. It also allows the > > vendor to get away with a relatively small number of applications notes discussing how it > > gets used in a system. An error in the microprocessor instruction stream generally will > > not damage the processor, and each instruction is more or less stand-alone. 
An FPGA > > bitstream sequence very is very tightly coupled to itself, in that 1) A wrong bitstream > > can burn up the device in many families, 2) the bitstream as a whole has to be internally > > consistent to make the connections between logic blocks. > > I really hadn't considered this but yes, the Xilinx and > Synplicity tools do warn (actually more like stop dead) of > impending doom when two drivers are on the same net when they > could be activated at the same time. This is an excellent > argument in favor of keeping the bitstream proprietary. > > Good grief I have enough problems trying to get the design done > without worrying about the details of the bitstream. This is the > least of my worries! At the top of the list, as of today, is > availability. I was "promised" 600Es (FG680s) about now, but > instead got a note from the distributor indicating another 16-24 > weeks, depending on speed grade. Gack! I'm dead. Without bad > luck I'd certainly have none! (I started this project with > DynaChip). > > ---- > Keith -- -Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email randraka@ids.net http://users.ids.net/~randrakaArticle: 21692
Hi, thanks -- that (the technote 7323) really helped to remove the need for the global clock buffer. I have another workaround if nothing can be done about the synthesizer: explicitly instantiate a BUFG (not IBUFG) in your design. During synthesis it will still come out as using a global clock buffer, but during implementation it will no longer insist that the input to the global clock buffer come from the clock pad; any I/O pad will do. You just still use up one global clock buffer. (Thanks to Bill from Alpha Data.) Thanks to all who have responded. spyng In article <8bri5e$3oi$1@lyra.eoa.telcom.oki.co.jp>, <yasui149@oki.co.jp> wrote: > Hi, > > Let's see next URL.. This is workaround. > > http://support.xilinx.com/techdocs/7323.htm > > Set the default Global Buffer to DONT_USE. > > <spyng@my-deja.com> wrote in message news:8bp2hn$eaq$1@nnrp1.deja.com... > > hi, > > did it work?! where do you set it ? > > > > I have try to set the same thing (don't use) in the GUI constraint > > editor for the FPGA express, but when the design is optimze , it is map > > to a BUFGP. > > than when Translate will warning and map with error. > > mypin loc constraint is set in the *.ucf file. > > > > I will try it again, any other special thing that you did? Thanks > > Sent via Deja.com http://www.deja.com/ Before you buy.Article: 21693
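A minimal sketch of the BUFG workaround from the global clock buffer thread above. The BUFG port names (I, O) are as in the Xilinx libraries guide; the entity, instance, and signal names here are invented for illustration:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity clk_on_user_pad is
  port (
    clk_pad : in  std_logic;   -- ordinary I/O pad, not a dedicated clock pad
    d       : in  std_logic;
    q       : out std_logic
  );
end clk_on_user_pad;

architecture rtl of clk_on_user_pad is
  -- BUFG (not IBUFG) instantiated explicitly: synthesis still uses a
  -- global clock buffer, but implementation no longer requires the
  -- buffer input to come from a dedicated clock pad.
  component BUFG
    port ( I : in std_logic; O : out std_logic );
  end component;
  signal clk_int : std_logic;
begin
  u_bufg : BUFG port map ( I => clk_pad, O => clk_int );

  process (clk_int)
  begin
    if clk_int'event and clk_int = '1' then
      q <= d;
    end if;
  end process;
end rtl;
```

As noted in the post, one global clock buffer is still consumed.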
In article <pLMYl5dhX7hK-pn2-XheSdoZUGmYc@localhost>, Keith R. Williams wrote: >On Wed, 29 Mar 2000 00:44:09, Ray Andraka <randraka@ids.net> >wrote: > >> Well, no. It's application is not determined, but it's function is. It is a rather >> complex state machine that executes a limited number of different instructions, generally >> in a serial fashion. The hardware is what it is and you can't change it no matter what >> you do to the processor (well, putting power on it backwards to change it into a lump of >> epoxy encapsulated glass doesn't count). As a result, the permutations of how it gets >> used in a system are pretty well limited. The timing of the signals, the functions of the >> pins etc all remain pretty much the same (some pins may be programmable for more than one >> function, but you can't move a pin to another location etc). All of this constrains the >> range of applications much more so than what you have with an FPGA. It also allows the >> vendor to get away with a relatively small number of applications notes discussing how it >> gets used in a system. An error in the microprocessor instruction stream generally will >> not damage the processor, and each instruction is more or less stand-alone. An FPGA >> bitstream sequence very is very tightly coupled to itself, in that 1) A wrong bitstream >> can burn up the device in many families, 2) the bitstream as a whole has to be internally >> consistent to make the connections between logic blocks. > >I really hadn't considered this but yes, the Xilinx and >Synplicity tools do warn (actually more like stop dead) of >impending doom when two drivers are on the same net when they >could be activated at the same time. This is an excellent >argument in favor of keeping the bitstream proprietary. 
Let me summarize this to make sure I get it straight: because you don't feel like spending the effort to write the stream directly, or to write software to check the bitstream's validity (perhaps while writing it), they shouldn't release the bitstream specs for anyone to use?Article: 21694
Ray Andraka wrote: > Well, no. Its application is not determined, but its function is. It is a rather > complex state machine that executes a limited number of different instructions, generally > in a serial fashion. The hardware is what it is and you can't change it no matter what > you do to the processor (well, putting power on it backwards to change it into a lump of > epoxy encapsulated glass doesn't count). As a result, the permutations of how it gets > used in a system are pretty well limited. The timing of the signals, the functions of the > pins etc all remain pretty much the same (some pins may be programmable for more than one > function, but you can't move a pin to another location etc). All of this constrains the > range of applications much more so than what you have with an FPGA. It also allows the > vendor to get away with a relatively small number of applications notes discussing how it > gets used in a system. An error in the microprocessor instruction stream generally will > not damage the processor, and each instruction is more or less stand-alone. An FPGA > bitstream sequence is very tightly coupled to itself, in that 1) A wrong bitstream > can burn up the device in many families, 2) the bitstream as a whole has to be internally > consistent to make the connections between logic blocks. Your point summarizes to: ==FPGAs are more complex because you are programming hardware, and hardware is more complex than software==. I disagree with this concept. The pins on the chip may not move and the timing in relation to the clock may not change, but there are very significant issues in how to connect a micro to your circuit in terms of "if I use this pin for that and that pin for this, then I can only use this other pin for this and not that". The higher level timing, which is what you will be coding in software (and which is what we are talking about), can be just as complex as the low level timing, if not more so.
Then on top of all that, there are very significant issues in software design that you will never see in hardware. Many processors have special instructions for semaphores and other interlocked operations, which almost never occur in hardware designs, and if they do they are rather trivial to deal with, since you can design the hardware to do just what you want. And a micro may not self-destruct just because you misprogram it, but you can certainly do damage with the circuit it is in. This is one of those cases where it can be VERY important to control the timing of your code. Just ask the designers of an inkjet printer! I wonder how many print heads were destroyed before they got the first unit up and running? It would also be a very simple task to write a program to verify that a bitstream won't do damage to a chip. This could be provided along with the full description of the bitstream format to protect the engineer from himself ;) We can argue points of complexity for a long time, but I doubt that you can make a supportable argument that there is something inherently more difficult about designing hardware than designing software for a micro. But even if FPGAs are more complex to program than a micro, is that really a reason to prevent an engineer from having the information to use a chip the way he wants? -- Rick Collins rick.collins@XYarius.com remove the XY to email me. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design Arius 4 King Ave Frederick, MD 21701-3110 301-682-7772 Voice 301-682-7666 FAX Internet URL http://www.arius.comArticle: 21695
Hi all, I'm writing an interface to the internal LPM RAM in Altera Flex 10KE (LPM_RAM_IO) using Synplify 5.3.1. But when I compiled, I got the following warning message: @W:"d:\....\subctrl.vhd":19:2:19:9|Port has mixed directions of bits: RAM_Data[15:0]. EDIF file will have this bus port expanded. Some minor changes to the code (moving the assignment from (1) to (2) or (3)) change the message to an error: @E:"d:\...\subctrl.vhd":129:2:129:3|Storing a 'Z' value to an 'inout' port for more than 1 clock cycle is not yet supported. Use an 'out' for net ram_data_5(15 downto 0), if possible. I have been struggling with this for quite some time. Is there a good way of coding bidirectional signals so that there is no logic contention? When to drive the signal and when to tristate it? I think my code is very messy and very inefficient. I'm quite new to VHDL and logic design. If anyone has a better way of coding this, please teach me... Thanks for your time! BTW, the RAM clock is triggered on the rising clock edge and the controller on the falling edge. The clock runs at 33.8 MHz. Would there be any timing problem? If you need any further clarification, please ask. Thank you very much. I welcome any advice!! Regards MK remove "REMOVE" to reply.

[ input, sent by another process ] --> [ controller ] --> [ output, gated by ext clock ]
                                           ^^ |||| vvv
                                       [ Altera LPM RAM ]

ram_addr = address bus to RAM
ram_data = inout bidirectional data bus to RAM
wrten    = write enable signal to RAM
outputen = read signal to RAM
wrtflag  = a write flag from another process, a request to write data to RAM
rdflag   = a read flag from another process, a request to read data from RAM
rdhold   = a read requires 2 cycles (1st cycle to put the address, 2nd cycle to read back); rdhold delays wrtflag so that there is no conflict
****************
RAMInterface: PROCESS(nReset, MClk, WrtFlag, RdFlag)
BEGIN
  IF nReset='0' THEN
    RAM_Addr     <= (OTHERS=>'0');
    RAM_Data     <= (OTHERS=>'0');
    IntSubc_Data <= (OTHERS=>'0');
    WrtEn    <= '0';
    OutputEn <= '0';
    RdDone   <= '0';
    RdHold   <= '0';
    WrtHold  <= '0';
  ELSIF (MClk'event AND MClk='0') THEN
    RdDone   <= '0';
    WrtHold  <= '0';
    RAM_Data <= (OTHERS=>'Z');          --- (1)
    IF WrtFlag='1' AND RdHold='1' THEN
      WrtHold <= '1';
    ELSIF WrtFlag='1' THEN
      RAM_Addr <= CONV_STD_LOGIC_VECTOR(IntRAM_Addr,6);
      RAM_Data <= IntRAM_Data;
      WrtEn    <= '1';
      OutputEn <= '0';
    ELSIF RdFlag='1' THEN
      -- RAM_Data <= (OTHERS=>'Z');     -- (2)
      IF RdHold='0' THEN  -- To count one clock cycle b4 reading data from ram
        RAM_Addr <= SubcRAM_Addr;
        -- RAM_Data <= (OTHERS=>'Z');   -- (3)
        WrtEn    <= '0';
        OutputEn <= '1';
        RdHold   <= '1';
        RdDone   <= '0';
      ELSE
        RdHold   <= '0';
        RdDone   <= '1';
        IntSubc_Data <= RAM_Data;
      END IF;
    END IF;
  END IF;
END PROCESS RAMInterface;Article: 21696
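A common cure for both messages in the LPM RAM post above is to drive the bidirectional port from a single concurrent assignment outside the clocked process, steered by an output-enable register; the process then only decides when to drive. A hedged sketch, reusing the signal names and libraries from the post (the enable register drive_ram is invented):

```vhdl
-- One tristate driver, in exactly one place, so synthesis can map
-- RAM_Data to a single tristate buffer with no contention.
RAM_Data <= IntRAM_Data when drive_ram = '1' else (others => 'Z');

RAMInterface : process (nReset, MClk)
begin
  if nReset = '0' then
    drive_ram <= '0';
    WrtEn     <= '0';
    OutputEn  <= '0';
    RdHold    <= '0';
    RdDone    <= '0';
  elsif MClk'event and MClk = '0' then
    RdDone <= '0';
    if WrtFlag = '1' and RdHold = '0' then   -- write cycle: drive the bus
      RAM_Addr  <= conv_std_logic_vector(IntRAM_Addr, 6);
      drive_ram <= '1';
      WrtEn     <= '1';
      OutputEn  <= '0';
    elsif RdFlag = '1' then                  -- read: release the bus
      drive_ram <= '0';
      WrtEn     <= '0';
      OutputEn  <= '1';
      if RdHold = '0' then                   -- cycle 1: present the address
        RAM_Addr <= SubcRAM_Addr;
        RdHold   <= '1';
      else                                   -- cycle 2: capture the data
        IntSubc_Data <= RAM_Data;
        RdHold       <= '0';
        RdDone       <= '1';
      end if;
    end if;
  end if;
end process RAMInterface;
```

Because RAM_Data is assigned in only one place, the "mixed directions" and "storing a 'Z' in a process" complaints should both go away; the register on drive_ram keeps the bus released by default.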
Thanks a lot for the suggestions. Actually, in our case timing was the problem. We had in the design:

assign bus = (rd == 1) ? data_out : 'hz;

and in the test bench:

assign bus = (wr == 1 && rd == 0) ? data_in : 'hz;

The "posedge wr" in the design file apparently comes before "wr == 1" in the test bench. We corrected the problem in the test bench with:

assign bus = (wr_del == 1 && rd == 0) ? data_in : 'hz;
assign #10 wr = wr_del;

Now everything works just fine. Thanks, tz Richard Iachetta wrote: > In article <MPG.134999387baccd2298970d@ausnews.austin.ibm.com>, > iachetta@us.ibm.com says... > > Another example: > > > > if (a == 1) > > bus = y; > > else if (b == 0) > > bus = z; > > else bus = 0; > > > > What if b=X whenever a = 1? The RTL will always evaluate to bus = y > > because b is a don't care when a = 1. But the gatelevel bus will most > > likely (depending upon which gates exactly create the logic) equal X. > > One more thing. You can get this kind of behavior with continuous > assignment statements also: > > assign y = a | (b & c & (state == 3'b010)); > > If state = 3'b0X0, you won't get an X on y unless a=0 and both b and c are > 1. Otherwise, y will produce the correct result in RTL sim. But in gate > level sim, you are much more likely to see y go to X when state = 3'b0X0; > > -- > Rich Iachetta > iachetta@us.ibm.com > I do not speak for IBM.Article: 21697
I had assumed the bitstreams had built-in error detection to avoid bringing the device out of the configuration state until the bitstream had been verified. That should stop any 'major' violations in the design. But it wouldn't stop things like massive fanout on drivers, which would not be good for the chip! A. "Ray Andraka" <randraka@ids.net> wrote in message news:38E17B8C.5F2B7880@ids.net... > Its even worse than that. Your reference is to an easy to make mistake of multiple drivers on a > line, which the tools will prevent. When you crawl inside the bitstream it is very easy to make > more than one signal an output into a route. For example, I am acutely aware of one FPGA that > gets programmed with "let the magic smoke out" if you load it with a bitstream with all the bits > set to one. You program enough misconnects at once, and you have less than a fraction of a > second before your part does its China Syndrome imitation. Not many devices are immune either! > -- > -Ray Andraka, P.E. > President, the Andraka Consulting Group, Inc. > 401/884-7930 Fax 401/884-7950 > email randraka@ids.net > http://users.ids.net/~randraka > >Article: 21698
Let's not start this again. Most of the group agrees with you, Greg. If you want to do this there is no reason why you shouldn't be able to. But I don't think Xilinx would be pleased if people started phoning support to report smoke from their chip when they had used their own bitstream generator. I know you have no intention of looking to Xilinx for that sort of support, so it doesn't affect you, but most people in the group are concerned with implementing designs in FPGAs and don't care in that level of detail about how the tools work. Personally I would be worried about writing the code to check the bitstream. Imagine the number of chips you could go through debugging that software!! A. "Greg Alexander" <galexand@sietch.bloomington.in.us> wrote in message news:8bs3fj$h9r$1@jetsam.uits.indiana.edu... > In article <pLMYl5dhX7hK-pn2-XheSdoZUGmYc@localhost>, Keith R. Williams wrote: > >On Wed, 29 Mar 2000 00:44:09, Ray Andraka <randraka@ids.net> > >wrote: > > > >> Well, no. It's application is not determined, but it's function is. It is a rather > >> complex state machine that executes a limited number of different instructions, generally > >> in a serial fashion. The hardware is what it is and you can't change it no matter what > >> you do to the processor (well, putting power on it backwards to change it into a lump of > >> epoxy encapsulated glass doesn't count). As a result, the permutations of how it gets > >> used in a system are pretty well limited. The timing of the signals, the functions of the > >> pins etc all remain pretty much the same (some pins may be programmable for more than one > >> function, but you can't move a pin to another location etc). All of this constrains the > >> range of applications much more so than what you have with an FPGA. It also allows the > >> vendor to get away with a relatively small number of applications notes discussing how it > >> gets used in a system.
An error in the microprocessor instruction stream generally will > >> not damage the processor, and each instruction is more or less stand-alone. An FPGA > >> bitstream sequence very is very tightly coupled to itself, in that 1) A wrong bitstream > >> can burn up the device in many families, 2) the bitstream as a whole has to be internally > >> consistent to make the connections between logic blocks. > > > >I really hadn't considered this but yes, the Xilinx and > >Synplicity tools do warn (actually more like stop dead) of > >impending doom when two drivers are on the same net when they > >could be activated at the same time. This is an excellent > >argument in favor of keeping the bitstream proprietary. > > Let me sumarize this to make sure I get it straight: Because you don't feel > like spending the effort to write the stream directly or write software to > check the bitstream's validity (perhaps as writing it), they shouldn't > release the bitstream specs for anyone to use?Article: 21699
Hi all, I have some trouble using LUT components in VHDL in a Virtex design. The VHDL description is synthesized using Synopsys_1998_08 and Xilinx M1.5i. Here is what I (try to) do and what happens: I add LUT1 components to my design and configure them to be inverters by adding a 'set_attribute <instance name> "INIT" -type string "1"' line to my Synopsys dc_shell script. So far everything is alright. When exchanging the LUT1 for a two-input LUT2 (I1 input tied to '0') and changing the INIT attribute in the dc_shell script to "5" (output is the inversion of input I0, don't care for I1), the Synopsys command "insert_pads" causes the following message (and termination of the command, of course): 'ERROR - The FPGA cell '<instance name>' does not contain the proper instance specific configuration information. (OPT-906)' What additional configuration information besides the INIT attribute is needed for this type of cell? The Xilinx library guide doesn't say anything about this (or at least not in a place I would search for this info). Does anybody know how to use these LUT components in the Synopsys/Xilinx design flow? The reason for using LUT components in VHDL is that I want to control their placement via RLOC constraints in the .ucf file. And here comes my second problem. For the LUT1 components (which, as mentioned above, Synopsys translated correctly), constraints like

INST core/clocks/inv_lut* U_SET inv_line ;
INST core/clocks/inv_lut_1 RLOC=CLB_R1C1 ;
INST core/clocks/inv_lut_2 RLOC=CLB_R1C2 ;
. . .

caused the mapper to abort with the error messages (one for every RLOC) 'Bad format for RLOC attribute on LUT1 symbol "core/clocks/inv_lut_... " (output signal=core/clocks/invsig<..>)'. What's wrong with this? I used exactly the syntax given by the Xilinx documentation, and even tried it with a "*" at the end of the RLOC constraint or with a slice specified within the CLB, without success. Can anybody out there help me with either or both of my problems? TIA, Jens
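For reference, a sketch of the kind of structural instantiation being described (entity, instance, and signal names are invented; the INIT value is applied from the dc_shell script exactly as quoted above, not in the VHDL itself):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity inv_line is
  port ( a : in std_logic; y : out std_logic );
end inv_line;

architecture structural of inv_line is
  -- LUT2 used as an inverter: I1 tied to '0', INIT = "5"
  -- (output = not I0 for either value of I1), set via
  --   set_attribute inv_lut_1 "INIT" -type string "5"
  -- in the dc_shell script.
  component LUT2
    port ( O : out std_logic; I0 : in std_logic; I1 : in std_logic );
  end component;
begin
  inv_lut_1 : LUT2 port map ( O => y, I0 => a, I1 => '0' );
end structural;
```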