Marko wrote:
> Traditionally, firmware was defined as software that resided in ROM.
> So, my question is, what do you call FPGA code? Is "firmware" appropriate?

At my previous employer, a proposal was submitted and the Xilinx configuration files were called firmware. Someone forgot to read the RFP carefully, because we were specifically prohibited from developing firmware. The proposal was changed, I believe to 'configuration file'.

---
Joe Samson
Pixel Velocity

Article: 97326
Hal Murray wrote:
> What's the spectrum of the output of a "two-modulus prescaler"?
>
> What's the spectrum of a DDS?
>
> How low do I have to make the PLL bandwidth if I want to filter out the junk? (Assume I run the DDS output through a PLL to filter out the steps.)

The output of a two-modulus prescaler (specifically "fractional-N synthesis") and the MSB of a DDS (as opposed to a phase-to-sine lookup feeding a DAC) are the same thing - they produce the same spectra. The largest spurs are related to the closest integer-ratio approximations to the output-to-reference clock ratio, with many smaller spurs coming from mixing of those components and other slightly nonlinear effects of the ratios that produce these beat frequencies.

With no extra help, both approaches have phase steps in some situations that are too low in frequency to be filtered out by an analog PLL filter. Imagine an output frequency so close to a divide-by-12 that the divide-by-13 happens once a second. The result will be the same from the MSB of the DDS and from the accumulator for the divide-by-12/13 control - a phase "slip" that has a ramp back to "steady state" controlled by the analog PLL filter.

The phase steps in DDS or fractional-N synthesis can be reduced by using the same techniques as in sigma-delta converters, but the control is a little more than a dual modulus. If there is a phase offset from "ideal" that is a ramp going from -0.5 unit intervals (UI) of phase error to +0.5 UI of error, adding compensation to the output divider to push the phase error occasionally beyond the +/- 0.5 UI range at a high enough frequency will allow the PLL's loop filter to average the compensation to directly match the offset from "ideal."

Different systems have different requirements for what counts as "good" quality results. Using the dithering just mentioned, there are often still noticeable spurs in the output spectrum, though they won't look like much in the time domain. Many systems - such as analog ones - can't deal with the associated background tones and would gladly trade off a higher noise floor for lower spurs. In most of the stuff I want to deal with, I'm concerned about the time domain.
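[Editor's note: to make the spur discussion above concrete, here is a minimal simulation sketch (an illustration, not from the thread): it builds the MSB of a small DDS phase accumulator whose output/reference ratio sits just off a divide-by-12 and prints the strongest lines in its spectrum. The 16-bit accumulator, the increment of 5461 and the use of Python/NumPy are assumptions chosen purely so that one FFT frame covers a full repetition of the pattern.]

import numpy as np

ACC_BITS = 16                    # accumulator width, kept small so one FFT frame
N = 1 << ACC_BITS                # covers a full period of the pattern (no leakage)
inc = 5461                       # phase increment: output/reference ratio ~ 1/12

n = np.arange(N, dtype=np.uint64)
acc = (inc * n) & (N - 1)                            # phase accumulator, mod 2**ACC_BITS
msb = (acc >> (ACC_BITS - 1)).astype(np.float64)     # the MSB square-wave output

spec = np.abs(np.fft.rfft(msb - msb.mean()))
top = np.argsort(spec)[::-1][:8]                     # eight strongest spectral lines
for b in sorted(top):
    print(f"f = {b / N:.6f} x Fref   {20 * np.log10(spec[b] / spec.max()):7.1f} dBc")

The odd harmonics you would expect from any square wave show up in the list as well; the lines of interest are the smaller ones clustered near the fundamental, which move closer to the carrier (and become harder to remove with a realistic loop bandwidth) as the ratio approaches the nearest integer division.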
Article: 97327

I played around in Quartus and I think I've discovered the answer to my own question. If you assign a pin location of "null", that apparently removes the assignment, i.e.:

cmp add_assignment top_name "" pin_name LOCATION null

Cheers,
Ernie

Article: 97328
>Mark McDougall wrote:
>> FWIW, I'd never return to schematics. Yes, top level can be nice as Mike pointed out, but I find the disadvantages - slow, buggy tools, cumbersome maintenance, no portability - far outweigh the benefits.
>>
>> Regards, Mark

Ironically, even as a 'born again' HDL'er, you just pointed out that the "benefits" of HDL over Schematics may not have had anything to do with actual benefits; but rather, may have been due simply to the abysmal fpga-schematic -tools- that we were all forced to deal with.

Assuming -excellent- schematic tools, the benefits of graphical presentation will shine, as they must -always- do. This is simply because the human brain is -wired- to process graphical information; and graphical/visual presentation has a higher info-content than plain text.

Think of graphical-info as "high level language" vs. a screen full of text as "assembly", and that'd be a semi-accurate analogy I believe.

As I write this, I realize that there are -two- aspects of this issue;

1) Design is one (the OP's original subject)

2) but Presentation is a 2nd aspect

Schematics have it all over HDL for 2), and as Ray pointed out, have distinct benefits in top-level design as well.

I've forgotten who posted about having invested a lot of time in setting up an optimized Viewlogic setup; but his description of how it operated gave a glimmer of just what an -excellent- FPGA schematic front end tool would be like. Just think if it was easily -2 way-!

Where one was dealing with a major functional block, and it was faster and easier to understand visually, you could drop a box on the screen, label it, and open an HDL window to 'populate' it. And where it was quicker to just write 3 lines of HDL, you could start with that, and have the tool automatically plop a finished and labeled box on the screen for you. You'd be working in both high level visual -and- low level HDL code -simultaneously-. Now -that- would be nice...

Regarding the complaints about drawing "hundreds" of wires coming off an FPGA, and having to draw all the caps on the FPGA....c'mon, those are both partially specious I think.

In the first place, the vast majority of pins on the vast majority of FPGA designs are -bus- lines; which are virtually always drawn as a single line representing 8, 16, 32, or more connections/wires. Draw just 4 or 5 of those, and you've finished 80% of the connections on a 160 qfp like a PCI intfc design. It ain't -that- tough... <g>

In the 2nd place, the convention for bypass caps is to group them all on their own schematic; i.e. a "power and ground" page. It is rare that I ever see them placed right next to chip symbols on any schematic these days. (One -does- want to remember to pass placement notes to the layout-man though! <grin>)

With most modern schem/pcb systems, a dozen bypass caps can be instantiated onto the sheet, and connected to GND and 3.3V, in about 60 seconds.

In terms of 'readability'...let's all be honest now. Only the most -expert- of HDL designers can look at a multi-page listing of dense code and have -any- understanding of what the hell an FPGA -does-, in any short time frame (i.e. seconds or minutes).

The rest of us, even most HDL designers, will spend MANY minutes, even hours, trying to figure out what the heck the part -does-; let alone how it works.

Yet -everyone- could look at a schematic of that chip and gain at least a glimpse of the overall functionality in mere -seconds-; and a pretty in-depth understanding in perhaps 2 minutes.
For all the rest of the engineers and technicians on the entire planet, a schematic of the inside of that FPGA is an -invaluable- aid in troubleshooting and repairing the equipment that it's a part of.

I have spent plenty of years on both ends of the stick, design and troubleshooting, both my own designs and those of many others; and that's how I've learned to value both the code AND the high-level visual presentation.

You will notice that even in the pure software world, higher-level tools like flowcharters and "code block layout" tools retain a place of importance. Again, graphical/visual presentation is simply more -efficient-....it carries more information per visual/brain "bit", so to speak.

A single 8x11" schematic page can -easily- represent -many- pages of HDL code....and even represent that HDL -and- 10,000 lines of software combined (i.e. one little rectangle labeled "DSP core for FFT's", etc. etc.). This is an extremely valuable design method, and presentation type, that we shouldn't be letting escape from our toolboxes so blithely.

I thank the posters who talked about or asked about what tools are out there; for both the OP's entirely valid complaint on the current offerings, and for the other category of tools to -extract- readable schems from HDL code.

I'm a firm believer in HDL as well, and would love to have a toolset for automated schematic production from code; but in no way does plain text replace visual/graphical display of information, imho.

Article: 97329
Symon wrote:
> I've found myself doing a fair bit of maintenance on some viewlogic stuff over the last month or so, and, for readability, I've found myself doing the same as I do for VHDL, i.e. drawing timing diagrams to work things out.
>
> It's got to the point where I've looked (unsuccessfully) for tools to do some PCB design entry using VHDL. I'm getting sick and tired of drawing a bunch of rectangles with hundreds of wires coming off them to represent an FPGA. Also, adding each and every bypass cap. If I could use VHDL for this, I could cut and paste a lot of the pin information from the data sheet to be more accurate and quicker. Of course, for analog electronics this makes little sense, but it would be nice to be able to mix and match the two entry methods.

I hate to admit it sometimes, but for several recent fpga boards I've just skipped doing schematics entirely, choosing instead to design with scripts, augmented parts libraries, and netlists. It's much faster to verify a board with netlists when parts have reasonable signal names on library pins, and using script-driven netlisting with reverse annotation from the layout to transfer net names to the fpga pins. Visually scanning netlists for 1517-ball bga's is MUCH quicker than following that many lines on a schematic and verifying pin/ball numbers.

It would be very nice to document a design if there were tools that would take the same finished netlist and pcb layout, and autogenerate schematics --- except for the fact that thousand-pin bga's just don't fit well on schematics anyway using 1960's drawing standards.
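[Editor's note: for what it's worth, a minimal sketch of the kind of script-driven cross-check described above. The file names, the UCF-style constraint syntax, the one-net-per-line board netlist format, and the U1 reference designator are all assumptions for illustration, not anything from the post.]

import re

FPGA_REF = "U1"  # assumed reference designator of the FPGA on the board

def read_ucf_locs(path):
    """Return {ball: signal} from lines like: NET "dram_dq<3>" LOC = "A12";"""
    pat = re.compile(r'NET\s+"([^"]+)"\s+LOC\s*=\s*"?(\w+)"?', re.IGNORECASE)
    locs = {}
    with open(path) as f:
        for line in f:
            m = pat.search(line)
            if m:
                locs[m.group(2).upper()] = m.group(1)
    return locs

def read_board_nets(path):
    """Return {ball: net} for FPGA_REF from lines like: DRAM_DQ3 U1.A12 U5.3"""
    balls = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            for node in parts[1:]:
                ref, _, pin = node.partition(".")
                if ref == FPGA_REF:
                    balls[pin.upper()] = parts[0]
    return balls

fpga = read_ucf_locs("top.ucf")
board = read_board_nets("board.net")
for ball in sorted(set(fpga) | set(board)):
    sig = fpga.get(ball, "<unconstrained>")
    net = board.get(ball, "<unconnected>")
    if sig.lower() != net.lower():
        print(f"{ball}: FPGA signal '{sig}' vs board net '{net}'")

The point is the naming discipline the post mentions: once library pins and FPGA signals share names, a mismatch report like this is quick to scan even for a large BGA.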
Article: 97330

The background for my question whether an FPGA is viewed as software or as hardware comes from the regulations for medical devices.

From the perspective of regulatory requirements it makes a difference whether an FPGA is viewed as software or as purely hardware, because regulatory requirements for medical devices are more comprehensive when software is involved.

Regulatory requirements for medical devices are focussed on the safety and reliability of the finished medical device. If the resulting product cannot be tested in full - which is the case with software - then the regulations require controlled processes to be in place for (software) product development in order to minimize the risk of hazardous situations.

Is anybody in this NG experienced in the field of applying FPGAs in medical devices and the view of regulatory bodies?

www.hvdb.tk

Article: 97331
Hub van de Bergh wrote:
> The background for my question whether an FPGA is viewed as software or as hardware comes from the regulations for medical devices.
>
> From the perspective of regulatory requirements it makes a difference whether an FPGA is viewed as software or as purely hardware, because regulatory requirements for medical devices are more comprehensive when software is involved.

Looks like a duck, swims like a duck, quacks like a duck, it must be a duck???

In the end, if it looks like software (HDL or HLL), acts like software (loads from some read/write storage media such as eeprom, flash, disk, network), and is readily updated/changed like software, there are probably very good reasons for regulators (or litigating lawyers) to suggest that the loss of life or injury would have been avoided if the development had been held to the proper, and more rigorous, standards of software. It's really hard to claim due diligence after the fact when you choose to take the easy way out by splitting hairs with definitions.

Reconfigurable computing and system-on-chip design strategies blur the line today, and will probably solidly move that line so that FPGA design is considered software tomorrow. And for good reason. ***HARD***ware designs cannot easily or cost-effectively be changed in the field, and as such everyone puts more effort into getting it right the first time for that reason. ***SOFT***ware has a reputation of people taking shortcuts because it can be changed next release, and get better (or right, or perfect) over time.

I'm almost certain someone can find a written memo from some company to ship a hardware design in an FPGA with known critical flaws to meet contractual shipment obligations or sales quotas, with the expectation that it can be corrected with an updated FPGA image next week, month, year. One such memo, in litigation, clearly defines HARD and SOFT when it comes to product failures and field upgradability.

Article: 97332
richard wrote:
> So, are any of you guys seeing the same sorts of problems?

The last great schematic editor bundled with Xilinx software was ver 2.1i. That schematic tool is very intuitive and very easy to use. The schematic software bundled with Xilinx 3.1i and later is very clumsy, and the diagrams look very ugly too. I have not used schematics since they released ver 3.1i.

> Doesn't it make sense to have a half-hour review of a schematic rather than a week-long review of a ream of HDL listings?

It does make sense to learn schematics first before learning HDLs. Schematics force you to think in terms of hardware. In my opinion, it is not a good idea to learn HDL directly without having any exposure to schematics or getting your hands dirty with TTL chips. People who never learn schematics usually treat HDL just like C. When they do that, their code either does not compile, or compiles but the result is totally different from their expectation. HDL has a great advantage over schematics, but one should know the schematic representation of their HDL code.

Hendra

Article: 97333
Adam Megacz wrote:
> Might be OT, but I'm interested in any pointers to the current state-of-the-art in automatic schematic generation. By this I mean: I give it a netlist, it draws what it thinks is a "pretty" schematic for some heuristic definition of "pretty". Even if "pretty" means "just barely not hideously ugly".
>
> In the area of mathematical graphs (spline edges, no 90-degree requirements -- a very different problem) GraphViz is sort of the "reference point" from which everybody else makes improvements:
>
> http://www.graphviz.org/
>
> ... I'm looking for some similar analogous feature-point for schematics rather than general directed graphs. And yes, I know this will never be as good as human-drawn schematics.
>
> - a

Take a trip to the DAC website and look at the vendors of the EDA software components that Synopsys, Synplicity, and others buy from. There was/is a company in Germany that made these netlist => schematic tools; the last time I went, I chatted with the developer about precisely the concerns in this thread. He was surprised at my comments - they only seem to talk with their immediate customers and not final end users, so they don't get real feedback. He asked me for examples of bad schematics that could be drawn better and what the heuristics might be, but I never took him up on it.

John Jakson

Article: 97334
Hi Metal,

Thanks for a good thought provoking post. I added a few 'in my humble opinion' thoughts to it! I'd be interested in what you think of them!

"metal" <nospam@spaam.edu> wrote in message news:sqjkv19edad40j2oi3kfqj2c5lvvcnso1f@4ax.com...
> Mark McDougall wrote:
>> FWIW, I'd never return to schematics. Yes, top level can be nice as Mike pointed out, but I find the disadvantages - slow, buggy tools, cumbersome maintenance, no portability - far outweigh the benefits.
>>
>> Regards, Mark
>
> Ironically, even as a 'born again' HDL'er, you just pointed out that the "benefits" of HDL over Schematics may not have had anything to do with actual benefits; but rather, may have been due simply to the abysmal fpga-schematic -tools- that we were all forced to deal with.
>
> Assuming -excellent- schematic tools, the benefits of graphical presentation will shine, as they must -always- do. This is simply because the human brain is -wired- to process graphical information; and graphical/visual presentation has a higher info-content than plain text.
>
> Think of graphical-info as "high level language" vs. a screen full of text as "assembly", and that'd be a semi-accurate analogy I believe.

OK, so this 'pictures are better' idea is not necessarily so. Cro-Magnon man was great at cave painting. However, as someone once nearly said, I didn't get where I am today by looking at pictures. It took the printing press to move mankind to the level of technology we're at now.

> As I write this, I realize that there are -two- aspects of this issue;
>
> 1) Design is one (the OP's original subject)
>
> 2) but Presentation is a 2nd aspect
>
> Schematics have it all over HDL for 2), and as Ray pointed out, have distinct benefits in top-level design as well.

How many textbooks are on your shelf? How many have a list of chapters in text, and how many have a picture representing the contents of each section? Who says top-level design should be in schematics?

> I've forgotten who posted about having invested a lot of time in setting up an optimized Viewlogic setup; but his description of how it operated gave a glimmer of just what an -excellent- FPGA schematic front end tool would be like. Just think if it was easily -2 way-!
>
> Where one was dealing with a major functional block, and it was faster and easier to understand visually, you could drop a box on the screen, label it, and open an HDL window to 'populate' it. And where it was quicker to just write 3 lines of HDL, you could start with that, and have the tool automatically plop a finished and labeled box on the screen for you. You'd be working in both high level visual -and- low level HDL code -simultaneously-. Now -that- would be nice...
>
> Regarding the complaints about drawing "hundreds" of wires coming off an FPGA, and having to draw all the caps on the FPGA....c'mon, those are both partially specious I think.
>
> In the first place, the vast majority of pins on the vast majority of FPGA designs are -bus- lines; which are virtually always drawn as a single line representing 8, 16, 32, or more connections/wires. Draw just 4 or 5 of those, and you've finished 80% of the connections on a 160 qfp like a PCI intfc design. It ain't -that- tough... <g>

Yep, in the old days of PQ160s it was easy enough. Now that there are 1000's of pins, it's a right royal PITA! It's true you may have busses connecting to the FPGA.
The trouble is, you need to connect the bus to the FPGA in a manner that reduces the number of layers of PCB you need. If you just whack the bus on the side of the schematic symbol, the poor layout guy will spend weeks adding layers, vias and God-knows-what trying to route your design. Use a picture of the BGA balls (NB. I'm not totally anti-picture!) and work out the best pin connections for the bus, and type them into a text file.

> In the 2nd place, the convention for bypass caps is to group them all on their own schematic; i.e. a "power and ground" page. It is rare that I ever see them placed right next to chip symbols on any schematic these days. (One -does- want to remember to pass placement notes to the layout-man though! <grin>)
>
> With most modern schem/pcb systems, a dozen bypass caps can be instantiated onto the sheet, and connected to GND and 3.3V, in about 60 seconds.

That's not nearly enough caps. ;-) C'mon, how long does it take to type "12 Vccint 100nF Caps"? Less than a minute, and you understand it!

> In terms of 'readability'...let's all be honest now. Only the most -expert- of HDL designers can look at a multi-page listing of dense code and have -any- understanding of what the hell an FPGA -does-, in any short time frame (i.e. seconds or minutes).

No. For both methods, HDL or schematic, the key to understanding something you've just come across is to read the comments. In text. The signal names should be meaningful. Text, text, text!!

> The rest of us, even most HDL designers, will spend MANY minutes, even hours, trying to figure out what the heck the part -does-; let alone how it works.
>
> Yet -everyone- could look at a schematic of that chip and gain at least a glimpse of the overall functionality in mere -seconds-; and a pretty in-depth understanding in perhaps 2 minutes.
>
> For all the rest of the engineers and technicians on the entire planet, a schematic of the inside of that FPGA is an -invaluable- aid in troubleshooting and repairing the equipment that it's a part of.
>
> I have spent plenty of years on both ends of the stick, design and troubleshooting, both my own designs and those of many others; and that's how I've learned to value both the code AND the high-level visual presentation.
>
> You will notice that even in the pure software world, higher-level tools like flowcharters and "code block layout" tools retain a place of importance. Again, graphical/visual presentation is simply more -efficient-....it carries more information per visual/brain "bit", so to speak.
>
> A single 8x11" schematic page can -easily- represent -many- pages of HDL code....and even represent that HDL -and- 10,000 lines of software combined (i.e. one little rectangle labeled "DSP core for FFT's", etc. etc.). This is an extremely valuable design method, and presentation type, that we shouldn't be letting escape from our toolboxes so blithely.

ENTITY DSP_core_for_FFTs IS
  PORT(
    CLOCK    : in  std_logic;
    DATA_IN  : in  std_logic_vector(127 downto 0);
    DATA_OUT : out std_logic_vector(127 downto 0)
  );
END DSP_core_for_FFTs;

Pretty clear to me.

> I thank the posters who talked about or asked about what tools are out there; for both the OP's entirely valid complaint on the current offerings, and for the other category of tools to -extract- readable schems from HDL code.
> I'm a firm believer in HDL as well, and would love to have a toolset for automated schematic production from code; but in no way does plain text replace visual/graphical display of information, imho.

I'm unconvinced that, with today's multi-million gate designs, schematics have much of a role. The designs are so complex that a language-based description is the way the design must be specified. I agree totally that the FPGA designer should have a 'view' of the basic hardware they're trying to fit their design into, but I don't think that has to be a bunch of rectangles with legs on! Some well-placed block diagrams can do wonders to illustrate connectivity and function, but well-written text is better!

Finally, if anyone wants to refute any of the points I've made in this post, I'd appreciate it if they could express their points in a Viewlogic schematic. Thanks. :-)

Cheers, Syms.

p.s. OK, that whole post was deliberately provocative, I know YMMV. I don't want to tell others how they should design; everyone has their own way of doing stuff, this is just my POV!

Article: 97335
hi

I am having a problem in PAR during the Assemble stage of partial configuration. PAR suddenly halts, without an error, saying:

-----------------------------------
par.exe has encountered a problem and needs to close
AppName : par.exe, ModName : libgi-guide.dll
Offset : 000003556
------------------------------------

Initial budgeting and module activation are quite ok. Moreover, everything is okay in the modular design settings. I am using ISE 6.3.03i on Windows XP. I wonder whether this is:

- a guided PAR problem: for example, a signal naming problem
- my design's problem: for example, a pseudo driver
- my machine's problem: for example, an OS or computer problem

Thank you in advance for your comment

Article: 97336
Hi,

When I look in the Spartan-3 datasheet I can't find any SSO (simultaneously switching output) guidelines for the CP132 package. The datasheet has them for VQ100, TQ144 and all the FT/FG BGA packages, but nothing for the CP132. Would it be similar to the BGA packages?

Actually I'm not too sure how the Xilinx chip scale package construction (and hence pin inductance) is different from a Xilinx BGA....?

http://www.semiconfareast.com/csp.htm provides the following definition:

Chip Scale Package, or CSP, based on the IPC/JEDEC J-STD-012 definition, is a single-die, direct surface mountable package with an area of no more than 1.2x the original die area. The acronym 'CSP' used to stand for 'Chip Size Package,' but very few packages are in fact the size of the chip, hence the wider definition released by IPC/JEDEC.

The IPC/JEDEC definition likewise doesn't define how a chip scale package is to be constructed, so any package that meets the surface mountability and dimensional requirements of the definition is a CSP, regardless of structure. For this reason, CSPs come in many forms - flip-chip, non-flip-chip, wire-bonded, ball grid array, leaded, etc.

Article: 97337
> The background for my question whether an FPGA is viewed as software or as hardware comes from the regulations for medical devices.

I'd call it firmware, the idea being that it's easier to change than hardware but harder to change than software. But it depends on who I'm talking to. If I wanted to be specific for somebody familiar with the gear we are discussing, I'd probably call it "the code for XXX" where XXX is the name of the particular FPGA.

I've worked on old microcoded machines that had some RAM for the microcode. It was rare, but a couple of good hacks compiled (micro) code on the fly and loaded it ...

I might call a simple ROM-based state machine "firmware" because I'd probably write a program to generate the contents of the ROM. For anything but the simplest sort of state machine, it's easier to hack an existing microcode assembler to do what you want and then actually write the code in your new assembly language.

> From the perspective of regulatory requirements it makes a difference whether an FPGA is viewed as software or as purely hardware, because regulatory requirements for medical devices are more comprehensive when software is involved.

Ask your lawyer. Do the regulations say anything interesting about simple software vs complicated software? Are you doing anything complicated in your FPGA? Is it just glue logic, or do you have complicated state machines?

Software people in general (both coders and managers) have a reputation for shipping buggy crap. Hardware guys have a reputation for doing (much) better. From what I've seen, it's a culture thing. Hardware projects tend to be expensive to fix so management tends to encourage doing it right. That message gets passed around to everybody working on a project.

I think it's possible to build good software. It doesn't happen very often. I've never worked on a safety critical project.

There is also the complexity issue. It's easier to build a tower of cards out of software. You can make it do something useful most of the time without paying any attention to the complicated corner cases.

--
The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.
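[Editor's note: as an aside, here is one shape the "write a program to generate the contents of the ROM" approach can take. A minimal sketch only; the two-bit state encoding, the toy next-state function and the .mem output format are assumptions for illustration, not anything from the post.]

# ROM-based state machine: address = {input bit, current state},
# data = {output bit, next state}. Registering the ROM output gives a
# small Moore machine that asserts its output after two 1s in a row.
STATE_BITS = 2
STATES = 1 << STATE_BITS

def next_state(state, inp):
    return min(state + 1, 2) if inp else 0   # count consecutive 1s, saturate at 2

def output_bit(state):
    return 1 if state == 2 else 0

with open("fsm_rom.mem", "w") as rom:
    for addr in range(2 * STATES):           # 1 input bit + STATE_BITS of state
        inp, state = addr >> STATE_BITS, addr & (STATES - 1)
        nxt = next_state(state, inp)
        word = (output_bit(nxt) << STATE_BITS) | nxt
        rom.write(f"{word:02x}\n")           # one hex word per line

The point is simply that the machine's behaviour lives in an ordinary program, which is exactly what makes "is it firmware?" a fair question.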
Article: 97338

Apologies - an embarrassing mistake. I had V1.5 of the datasheet; V1.6 of the datasheet has this information. Very quick response from the Xilinx hotline, < 1 hr. Was a bit of a foolish question though.

Andrew FPGA wrote:
> Hi, When I look in the Spartan-3 datasheet I can't find any SSO (simultaneously switching output) guidelines for the CP132 package. The datasheet has them for VQ100, TQ144 and all the FT/FG BGA packages, but nothing for the CP132. Would it be similar to the BGA packages?
>
> Actually I'm not too sure how the Xilinx chip scale package construction (and hence pin inductance) is different from a Xilinx BGA....?
>
> http://www.semiconfareast.com/csp.htm provides the following definition:
> Chip Scale Package, or CSP, based on the IPC/JEDEC J-STD-012 definition, is a single-die, direct surface mountable package with an area of no more than 1.2x the original die area. The acronym 'CSP' used to stand for 'Chip Size Package,' but very few packages are in fact the size of the chip, hence the wider definition released by IPC/JEDEC.
>
> The IPC/JEDEC definition likewise doesn't define how a chip scale package is to be constructed, so any package that meets the surface mountability and dimensional requirements of the definition is a CSP, regardless of structure. For this reason, CSPs come in many forms - flip-chip, non-flip-chip, wire-bonded, ball grid array, leaded, etc.

Article: 97339
Hub van de Bergh wrote:
> The background for my question whether an FPGA is viewed as software or as hardware comes from the regulations for medical devices.

Please read the very front of every Xilinx Data Book, where legal issues like copyright etc. are mentioned on page 2, even before the table of contents. The next-to-last paragraph says:

"Xilinx products are not intended for use in life support applications. Use of Xilinx products in such applications without the written consent of the appropriate Xilinx officer is prohibited."

This of course has to do with legal liabilities. So it is serious stuff.

Peter Alfke, Xilinx Applications (from home)

Article: 97340
Have you considered a power series approximation? I use that for sin/cos and it works quite well over a small range.

On Mon, 20 Feb 2006 22:09:50 GMT, mk<kal*@dspia.*comdelete> wrote:
> Hi everyone, I am currently using cordic for arctangent in my system. I need at least 12 bits of accuracy. My x,y inputs are less than |1| in the form of 2.16. I have made some optimizations to another block in my system and it turns out that I have 6 extra slots of 32x32 multiplier resource but usage of 4 would be better (because of scheduling constraints). I don't have any dividers of course.
>
> Are there any algorithms which would need very few/small roms (which can be implemented in combinational logic) and 4 or so multipliers (and some adder/subtracters)? Again I can't afford any dividers. Any suggestions are welcome.
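[Editor's note: to put a number on the power-series suggestion, here is a quick check (an illustration, not the poster's design): an odd polynomial for arctan on |z| <= 1, using what I believe are the Abramowitz & Stegun 4.4.49 coefficients, evaluated in Horner form for six multiplies. It assumes z = y/x is already available, so it does not by itself address the "no divider" constraint; dropping terms trades accuracy for multipliers.]

import numpy as np

# Degree-9 odd polynomial for arctan(z), |z| <= 1; quoted error is on the
# order of 1e-5 rad, well inside a 12-bit requirement.
A1, A3, A5, A7, A9 = 0.9998660, -0.3302995, 0.1801410, -0.0851330, 0.0208351

def atan_poly(z):
    t = z * z                               # 1 multiply
    p = ((A9 * t + A7) * t + A5) * t + A3   # 3 multiplies
    return (p * t + A1) * z                 # 2 multiplies

z = np.linspace(-1.0, 1.0, 20001)
err = np.abs(atan_poly(z) - np.arctan(z))
print(f"max error = {err.max():.2e} rad "
      f"(~{-np.log2(err.max() / (np.pi / 2)):.1f} bits of a pi/2 full scale)")

Since |z| <= 1 only covers the region around the x axis, the usual atan2-style folding (swap x and y, fix up signs, add the right multiple of pi/2) extends it to the full circle without any extra multipliers.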
Article: 97341

But don't be too scared.. every single electronics company on the planet has the same disclosure statement. The best bet is to look at aerospace: everything is in triplicate, and there are hardware safety interlocks too. FPGAs, like software, are still subject to gamma rays and single-bit events. Even if it's one in a million, you don't want to be that millionth.

Just don't forget.. with medical applications you had better be bloody sure it works.. or it will be bloody for sure.

Simon

"Peter Alfke" <alfke@sbcglobal.net> wrote in message news:1140494163.600023.137960@o13g2000cwo.googlegroups.com...
> Hub van de Bergh wrote:
> > The background for my question whether an FPGA is viewed as software or as hardware comes from the regulations for medical devices.
>
> Please read the very front of every Xilinx Data Book, where legal issues like copyright etc. are mentioned on page 2, even before the table of contents. The next-to-last paragraph says:
>
> "Xilinx products are not intended for use in life support applications. Use of Xilinx products in such applications without the written consent of the appropriate Xilinx officer is prohibited."
>
> This of course has to do with legal liabilities. So it is serious stuff.
>
> Peter Alfke, Xilinx Applications (from home)

Article: 97342
Simon Peacock wrote:
> But don't be too scared.. every single electronics company on the planet has the same disclosure statement. The best bet is to look at aerospace: everything is in triplicate, and there are hardware safety interlocks too. FPGAs, like software, are still subject to gamma rays and single-bit events. Even if it's one in a million, you don't want to be that millionth.
>
> Just don't forget.. with medical applications you had better be bloody sure it works.. or it will be bloody for sure.
>
> > "Xilinx products are not intended for use in life support applications. Use of Xilinx products in such applications without the written consent of the appropriate Xilinx officer is prohibited."

The problem is that the regs require similar levels of documented development processes for life support as they do for any supporting medical electronics, such as imaging products. The Xilinx and other disclaimers cover life support, which is just a small fraction of the equipment subject to these regs in many places. I spent some time working in an MRI shop 10 years ago, and I was amazed at the regs.

I've seen my share of crap hardware designs go out the door in 35 years, pushed by managers with their feet held to the fire for shipment schedules, so it's just not a software problem. The problem with most large software projects is that the logic complexity is two to four orders of magnitude larger than a very large hardware design. Corner cases sneak through in both hardware and software designs. In the old days you would see a board escape to production in a few revisions. Large ASIC projects burn a few more mask revs than people would like, but also fail the simulator pretty frequently for every mask rev .... and we see large ASIC projects with design errata that are never corrected too.

It's because the software complexity is so much higher, as well as part of the industry adopting low quality standards for both hardware and software, that it also makes sense to focus on a documented development process for both, especially software .... not the quality of the engineers working on it.

Article: 97343
> The problem is that the regs require similar levels of documented development processes for life support as they do for any supporting medical electronics, such as imaging products.

How many people know the story of the Therac-25? Is that life support gear? Imaging?

--
The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.

Article: 97344
> When I capture some data I have unexpected activity on my signals, including the 3 bits that are tied to 0 in the VHDL!!!
> Anyone else experienced this?

A colleague had this problem last week (but I'm not sure about the version) ... (I hope I remember correctly.) If you change the port connections with the core inserter tool, then the analyzer sometimes fails to update the names and port numbers ... you end up with some signals toggling that should not, and your trigger being constant ... AFAIR a new analyzer project could help ...

bye, Michael

Article: 97345
Peter Alfke wrote:
> Please read the very front of every Xilinx Data Book, where legal issues like copyright etc are mentioned on page 2, even before the table of content. The next-to-last paragraph says:
>
> "Xilinx products are not intended for use in life support applications. Use of Xilinx products in such applications without the written consent of the appropriate Xilinx officer is prohibited."
>
> This of course has to do with legal liabilities. So it is serious stuff.

Generally the data books and data sheets for ALL electronic components say that. Yet somehow life support devices manage to include electronic components. Once Xilinx (or any other vendor) sells you a chip, their claim that its use in a life support application is "prohibited" doesn't have any more weight than if I sold pencils and said that their use to write the answers to homework problems is prohibited.

When the life support system fails, and the heirs sue the company that made it, the doctor, the hospital, and the vendors of the components of the system, Xilinx can point to that paragraph and tell the jury "we tried to keep our customers from using our chips in life support systems." The jury may or may not find that paragraph to reduce Xilinx' liability. Personally, were I on a jury, I'd give that paragraph very little weight. After all, the life support system has to be made out of *something*.

That said, however, if a particular component failed and resulted in harm to a person, I'd ask why the designer of the life support system hadn't designed the system to be fault tolerant without that component being a single point of failure.

Eric

Article: 97346
Thanks for the reply,

I am not actually using the Core Inserter tool. I create the ICON core and the ILA core using the Core Generator, then manually instantiate them into my design and wire them up. At this point I then use the Core Analyzer to set up the trigger and capture the data.

Simon

Article: 97347
"Andy" <jonesandy@comcast.net> wrote in message news:1140467697.778272.78510@g47g2000cwa.googlegroups.com... > We have an application where we need 4 phase sampling of a differential > (LVPECL) input. I think we found a way to use the IDDRs from the IOBs > associated with both + and - differential pads on V4 devices to > accomplish this. > Andy, Neat solution!! You might like to check out Brian's post entitled "DIFF_OUT buffer example". Cheers, Syms.Article: 97348
Brendan Illingworth wrote:
> Thank you for the response, it was reassuring. I am in the process of taking a closer look at the timing margins at the controller on our Agilent Infiniium (2.25 GHz BW) scope and hopefully will be able to verify your suspicion. Additionally I have a comment; very rarely when the read cycles fail I only see one single rising edge on DQS as opposed to the four I would expect for an 8-burst read. I know that I am "catching" all the DQS transitions as I am looking at the line over multiple read cycles. Have you seen this behavior before?
>
> Regards, Brendan

If you are seeing only one (or less than a burst) of DQS on reads, then the cycle is being truncated by external command, one way or another.

One common problem that you have to be aware of is that the clock *at the DDR device* is defined by the CLK and #CLK crossing, not the 50% transition. Be sure you are within the spec for clock crossover. The other possible culprits (according to my handy DDR datasheet) are the command bits (see http://download.micron.com/pdf/datasheets/dram/ddr/512MBDDRx4x8x16.pdf for a typical DDR part, table 6) - a possible burst terminate command would do precisely what you are seeing. As the problem worsens with temperature, I would be suspicious of this, and of the timing from CLK/#CLK to the command bus.

Cheers

PeteS

> "PeteS" <ps@fleetwoodmobile.com> wrote in message news:1140445121.929435.112130@g47g2000cwa.googlegroups.com...
> > Brendan Illingworth wrote:
> > > Hi All,
> > >
> > > This post pertains to a DDR SDRAM controller that works perfectly for 95% of DIMMs used, and is part of a test system that contains a 2+ GHz oscilloscope monitoring the clock, command, and dqs signals at the DIMM pins.
> > >
> > > Several DIMMs seem to operate incorrectly only occasionally (possibly temperature dependent), and I suspect that the issue is some type of timing requirement that is on the edge of met and violated. When the incorrect behavior is seen the scope verifies that write cycles are operating correctly but that the read cycles are not operating correctly. During the read cycles the active command is issued, then the read command (satisfying Trcd) and accordingly the DQS signals are provided by the DIMM. Here is the catch; bursts of 8 was specified during initialization and the DQS signals during correct operation correspond to this choice; however, during the failing read cycles I will see sometimes 1 DQS rising edge, sometimes 2 and sometimes even 5 rising edges! My questions are as follows:
> > >
> > > 1) Has anyone with DRAM experience seen any behavior like this before?
> > >
> > > 2) ** What might cause a DDR module to provide a varying number of DQS strobes? **
> > >
> > > Recall that the design has no blatant errors since for most DIMMs no errors are ever seen.
> > >
> > > Thanks for any thoughts,
> > >
> > > Brendan Illingworth
> >
> > Most DDR problems are on the read cycle.
> >
> > When you do a read, DDR (unlike other memory) drives not only the bus but also the strobes (as you clearly know). What you may not know is the amount of variance there can be between devices on the DQS <-> data jitter and DQS (read) - clock variance.
> > For those DIMMs you are having problems with, the issue is very likely that DQS is being asserted outside your timing window (so the device is, in fact, asserting a burst of 8 - you simply are not catching it).
> >
> > I have seen this behaviour using commercial parts with DDR controllers built in - the programmable parameters such as DQS skew being there to get around issues just such as this.
> >
> > The causes for this are usually either poor clock/data termination (and in DDR you have to have something even if it's only series, which is do-able in point to point), excessive round trip delay on the clock, excessive added data skew (across D0-Dn vs. DQS) or excessive skew on DQS vs. clock (which moves DQS as received away from the clock) as seen at the receiver.
> >
> > In addition, DDR is not specified to operate below certain speeds, although *most* will. This is a classic gotcha where the internal DLLs won't properly lock to generate read DQS appropriately.
> >
> > The temperature dependency is perhaps to be expected - all logic devices tend to operate a little slower (edge response in particular, but propagation delay tends to increase as well), thus increasing the amount of time before a return DQS is asserted.
> >
> > My first place to look would be DQS skew timing in the controller (remembering that the memory device will assert DQS at the leading edge of data - it's up to the controller to determine how to use it to grab the data).
> >
> > It took me almost as long, perhaps longer, to set the rules for laying out DDR (I put down 3 independent 200/400 systems on one board) as it did to get it laid out. Even then, my timing budget had about 200 picoseconds of guaranteed margin - just what the guaranteed margin is on any particular DIMM I can't say without a datasheet handy.
> >
> > Cheers
> >
> > PeteS

Article: 97349
> Yes, that too; the other trick shown in the example is how to keep the two IOB DDR clock nets identically loaded by splitting the internal logic clock loads out onto another BUFG net.

Does this run into skew problems between the main clock and the IOB clock?

--
The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.