Thanks to both of you for the reply, but as I'm working with a university, in order to order some components I need an estimate; moreover, the university can pay by bank transfer only 30 days after the receipt of the goods... so I don't think that I can buy on the Xilinx web site.

Guido

"Alex Gibson" <me@privacy.net> wrote in message news:<2rngkvF1chsfiU1@uni-berlin.de>...
> "Jon Beniston" <jon@beniston.com> wrote in message
> news:e87b9ce8.0409231030.326d0b2a@posting.google.com...
> > gvaglia@gmail.com (Guido) wrote in message
> > news:<44f5f440.0409230231.4e64da8c@posting.google.com>...
> >> Hi all!
> >> My Xilinx Parallel Cable IV was lost during a trip and it takes 5
> >> weeks to receive another from Memec.
> >
> > You can order one from Xilinx's web site. They ship internationally.
> > They got me mine in a couple of days.
> >
> > Cheers,
> > JonB
>
> Xilinx's shipping is very good (whoever they use).
> Ordered on a Tuesday night, received the order on the Thursday morning.
> Not bad from the US to Sydney, Australia.
>
> Alex

Article: 73701
Hello,

I want to compare two designs, one of which is written in Verilog while the other is in VHDL. The test cases are also written in Verilog. When running the VHDL design (using XST VHDL), the design compiles without error, but I couldn't figure out a way to generate the simulation netlist in Verilog for the design. Is there a switch with which I can generate a Verilog simulation netlist for designs compiled with XST VHDL?

Thanks in advance,

Regards,
Varun Jindal

Article: 73702
Rune,
Have you run the timing analyser to find out which path is failing? The '(Levels of Logic = 9)' implies that it's not the carry chain, which, for a 28 bit counter, would be about 14 levels, 2 bits/level. I suggest you run the analyser and post your failing path.
Cheers, Syms.

"Rune Christensen" <rune.christensen@adslhome.dk> wrote in message news:41596bb8$0$257$edfadb0f@dread12.news.tele.dk...
> I have already done that but my goal is 8 ns and a simple implementation gives
>
> Slack:               -1.380ns (requirement - (data path - clock path skew + uncertainty))
> Source:              cnt_0 (FF)
> Destination:         cnt_3 (FF)
> Requirement:         8.000ns
> Data Path Delay:     9.373ns (Levels of Logic = 9)
> Clock Path Skew:     -0.007ns
> Source Clock:        clk_BUFGP rising at 0.000ns
> Destination Clock:   clk_BUFGP rising at 8.000ns
> Clock Uncertainty:   0.000ns

Article: 73703
I am working on a Xilinx V2P4 FPGA design with a PLB EMC. I can execute programs from 16-bit wide SRAM when they are downloaded with Xilinx's XMD. Now I wrote a BRAM bootstrap loader that copies a firmware image from flash into SRAM and then jumps to it. The STRANGE thing is that this jump most often does not work at all: the PowerPC stalls at location 0xFFFF0700. A jump instruction seems to cause the problem. Does anybody have ideas how to solve our problem? How can we track down the source of the problem?

Regards,
-hans

Article: 73704
Hi,

I am currently developing a coprocessor which performs some arithmetic operations more efficiently, and I want to test it within a processor. I have done this once with the LEON processor, but now I want to try it with the MicroBlaze or PowerPC processor, if this is possible. I use VHDL as the design language. Is there an online tutorial which explains how I can add my coprocessor and the new opcodes which should be executed by the coprocessor? Unfortunately I wasn't able to find any information about this on the web.

Cheers,
Roger

Article: 73705
Let me first say thank you for your responses. To address some of the questions / comments: I realize that not everyone will extract the same application or value from the on-chip NV memory; however, since it has the potential to provide or support different applications, general enough value may be justified for inclusion.

Answers / Comments for Jim and Hal:

a. bad bits: built-in ECC would make bad bits transparent to the user
b. speed = assume approximately 25ns access time
c. width = flexible, 16/32/64 bit
d. secure code storage would come by two assumptions: i) an on-chip processor removing the need for external access to the memory; ii) readback of the memory via JTAG etc. would be programmed as disabled by the user
e. on-chip charge pumps require no special external voltage supply
f. the user application would be able to log/write data to NV memory for storage
g. external applications could read NV data too, but you obviously lose security
h. OTP = non-erasable
i. no tradeoff between NV and RAM-like functionality - it would be a standard feature, not a swappable block within the silicon family
j. yes, for non-secure use it would enable integration into the FPGA of on-board NV data for some applications (mature s/w code, for example).

Realize also that for some applications you can create a sense of "virtual" unlimited multi-write capability. To demonstrate what I mean by virtual multi-write, I'll use the following example. Let's say that you know you may need to write 10,000 maximum lifetime events at 100 bits of data each, but do not need to keep history for more than 16 events. As a designer, you could buy a tiny block of flash or EEPROM of 1.6kbits and keep writing over the older events. Or, with OTP, you could simply purchase enough OTP memory to store the entire lifetime's worth of events. Many applications that store NV data can quantify the lifetime of storage from a practical specification.
One example: televisions need to store user configuration (like color, tint, favorite channels, etc.), and need to provide the user with unlimited adjustments of this data. However, if you make some assumptions, the design can calculate an upper limit of needed storage. For example, let's say 512 bits are stored and the TV's life is 20 years. I would venture to guess that if you assumed that data storage would occur at most once a day for each of 20 years, you would be happy to remove the $1 EEPROM from the board as long as the on-chip cost of the NV block was less.

20yr * 512b * 365days ≈ 3.7Mbit total

Any other creative thoughts on how this could be used would be appreciated. Thanks.

Message 2 in thread
From: Jim Granville (no.spam@designtools.co.nz)
Subject: Re: NV on-chip memory?
Newsgroups: comp.arch.fpga
Date: 2004-09-27 21:56:28 PST

Guy wrote:
> quick survey...
>
> Would it be of value to provide cheap on-chip one time programmable
> memory in an FPGA like Cyclone II?

Yes.

> Say 1-10Mbit depending on density.
>
> It would be field or user programmable either via a programmer (very
> fast) or by user logic.

Can you clarify - would this need supra voltages/currents, or can the FPGA program itself, while running? (e.g. could an NV event log be created?)

> It would be very secure (anti-copy) for:
>  secure s/w code with on-chip processor
>  secure data storage
>  configuration data(s)
>  etc.
>
> Please share your thoughts on the value (I would pay X% premium) and
> the type of ways you would use it?

First one is easy, X = 0 :)

> Thanks in advance

An interesting question. From a wider industry perspective, there is a trend in the microcontroller sector to offer MASK devices, where a couple of years ago the party line was 'we will now only make FLASH'.
This must be driven by a number of factors, and clearly customer demand, because:
** MASK is always going to be lower cost (fewer process steps, and less testing)
** Zero risk of field bit erasure
** No programming needed at the customer end
** Some apps stipulate that high voltage must be needed for program, so they want to _guarantee_ errant software cannot jump into the FLASH loader block erase routines, for example :)

Designers love flash, because they can change it easily, but there are many products that are code-stable, and the bean-counters prefer the lowest cost options.

So, back to your cheap OTP memory in FPGA question. First step would be to give every device a 128 bit unique ID, that could be used for seed and ID usage. Users would access that 'for free' in their present flows.

Next step is more questions:
What is the yield of this memory? All OTP memory has failures, so what does the user do if this occurs - remove the FPGA from an expensive PCB?
Speed and width of the memory?
Can it be easily swapped for (say) off-chip NV memory, or on-chip RAM?

The market model here is then the same as for MASK uC: product development is done using NV memory, and when proven stable, the lower price option is chosen. This applies to both CODE and FPGA CONFIG memory spaces.

For configuration data, I would guess the cost of this memory would start to be significant, which would push up the device variant price, and impact those users who never used this feature.

I would imagine, provided it was easy to develop with, that OTP memory would expand the usage of the simplest soft processors, for state engines and stable-code layers. Quite small OTP memory could be useful here, << 1Mbit, which should have minimal impact on device cost?

-jg

hmurray@suespammers.org (Hal Murray) wrote in message news:<eMWdnbmEWOAXbsXcRVn-pQ@megapath.net>...
> >It would be very secure (anti-copy) for:
> > secure s/w code with on-chip processor
> > secure data storage
> > configuration data(s)
>
> I'm missing something. How can data be both secure and
> useful? If I can read it back out to use it, clearly the bad guy
> can too. It might take him a while to write some code to get it.
> That doesn't seem like a big deal compared to the recent discussions
> on extracting the encryption key for the configuration bit stream.
>
> Same general idea for secure code. You might be able to make
> that secure if you had a dedicated memory that was only good
> for code. But that seems against the general idea of flexibility
> that makes FPGAs so interesting. Besides, it would probably be
> a pain to debug - or the debugging tools could be used to
> read the code memory.
>
> One time secure configuration data might be interesting.
> I'm not sure how much I would pay for it. Not much for
> anything I can think of right now.

Article: 73706
On Tue, 28 Sep 2004 09:21:13 -0700, Varun Jindal wrote:
> Hello,
>
> I want to compare two designs, one of which is written in verilog
> while the other one is in vhdl. the testcases are also written in
> verilog. while running the vhdl design, (using XST VHDL) .. the design
> compiles without error, but i couldnt figure out a way to generate the
> simulation netlist in verilog for the design. does there exist any
> such switch using which i can generate a verilog simulation netlist
> for designs compiled with XST VHDL
>
> thanks in advance,
>
> regards
> Varun Jindal

ngd2ver converts the NGD file to Verilog.
ngd2vhdl converts the NGD file to VHDL.

You don't need to synthesize the designs to simulate them together; any decent simulator like NC-Verilog can simulate Verilog and VHDL together. Take the two sources and put them in the same testbench with some comparison logic.

Article: 73707
In article <a11322d6.0409271941.e71e499@posting.google.com>, Guy <guys@altera.com> wrote:
> quick survey...
>
> Would it be of value to provide cheap on-chip one time programmable
> memory in an FPGA like Cyclone II?
> Say 1-10Mbit depending on density.

Would it slow down the fab or up the cost?

> It would be field or user programmable either via a programmer (very
> fast) or by user logic.
>
> It would be very secure (anti-copy) for:
>  secure s/w code with on-chip processor
>  secure data storage
>  configuration data(s)
>  etc.

There is already a more secure mechanism for this: SRAM-based encryption keys used to load encrypted bitfiles. That mechanism can be used to bootstrap a large non-volatile store, with a keystone of the SRAM-based encryption key, which is probably significantly harder to reverse/crack than on-chip static bits.

Thus the "savings" by putting it on-chip are not security, but the cost savings of not needing a large external Flash PROM.

--
Nicholas C. Weaver. to reply email to "nweaver" at the domain icsi.berkeley.edu

Article: 73708
What's this Read First or Write First on Xilinx BRAM about?

Article: 73709
"Brad Smallridge" <bradsmallridge@dslextreme.com> wrote in message news:10lj8q5o44cr9f2@corp.supernews.com... > What's this Read First or Write First on Xilinx BRAM about? If you write to mem[addr] the same clock edge that you read mem[addr] on that same port, what do you want for the result? Do you want the old data before the write as if it were a register (read first) or the brand new data that's in process of being written to the memory array (write first)? The Xilinx Software Manuals (library guide) have the tables that try to explain the nuances.Article: 73710
So, I'm reading and writing on the same clock. I suppose if I wanted to do this, maybe for speed, I would want the read to be the old data before the write. I can't think of any reason I would want the new data going in; I already have it.

Article: 73711
Rune Christensen wrote:
> I need information about creating a fast adder (n = n + 1)
> and a fast equal (a = b). (both 28 bit signals)
> I'm going to create a fast binary counter in a FPGA Spartan II
> Would it be faster to create several 4 bit LFSR and a converter to binary
> than a normal binary counter?
> I need the binary output because it's a timestamp

For a timestamp, why can't you write out the value of the LFSR, and decode it later at data analysis time? The sequence is predictable. 28 bits is a lot, though. How about two smaller LFSRs, with relatively prime cycle lengths? A small lookup table would convert each to a binary count, and then you can easily figure out the actual count.

-- glen

Article: 73712
Xilinx seems a little weak on FIFOs. Before I switched over I was working with Cypress chips, and they had drop-in FIFOs with self-addressing and all the flags (bells and whistles). Is this true, or have I just not discovered the right app note yet?

Article: 73713
You can choose a Xilinx core for the FIFO, or you can get a Virtex-4 with the FIFO built into the BlockRAM.

"Brad Smallridge" <bradsmallridge@dslextreme.com> wrote in message news:10ljaiji17fpe9@corp.supernews.com...
> Xilinx seems a little weak on FIFOs. Before I switched over I was working
> on Cypress chips and they had drop-in FIFOs with self-addressing and all the
> flags (bells and whistles). Is this true or have I just not discovered the
> right app note yet?

Article: 73714
"Brad Smallridge" <bradsmallridge@dslextreme.com> wrote in message news:10ljabn8ca1e8f6@corp.supernews.com... > So, I'm reading and writing on the same clock. I suppose if I wanted to do > this, maybe for speed, I would want the read to be the old data before the > write. Can't think of any reason I would want the new data going in. I > already have it. A fall-through FIFO is one example where the data going in might be needed on the output. If the memory is for storage of values, why not use the most recent value with a write-first? Fixed delay-lines are easy to implement with a single address with read first.Article: 73715
What about a FIFO? If it's empty, you want the first data in to appear right away.
Cheers, Syms.

"Brad Smallridge" <bradsmallridge@dslextreme.com> wrote in message news:10ljabn8ca1e8f6@corp.supernews.com...
> So, I'm reading and writing on the same clock. I suppose if I wanted to do
> this, maybe for speed, I would want the read to be the old data before the
> write. Can't think of any reason I would want the new data going in. I
> already have it.

Article: 73716
Bob Perlman <bobsrefusebin@hotmail.com> wrote in message news:<p03hl0tj9t776u2i0jnrl02oecn5g7lqjv@4ax.com>...
> I wouldn't be too quick to dismiss Stuart Sutherland's reference
> guide. I use the printed version all the time, and it's extremely
> handy.

Agreed, although it assumes that you know the language and need a cheat-sheet. Of course, that cheat-sheet comes in handy all the time! That's why it's always on top of the monitor or someplace else nearby.

-a

Article: 73717
Any Virtex BlockRAM always performs a "free" read operation whenever you do a write. The data appearing at the data output is either the data previously stored at that location (and about to be overwritten), or the data you are just writing, or else the data output does not change and keeps holding its old value. You pick one of these three choices by configuration.

The Virtex-4 FIFO has an optional "fall-through" mode, where data written into an empty FIFO immediately appears on the read output port.

Responding to a different thread: the Virtex-4 FIFO generates FULL and EMPTY flags and ALMOST FULL and ALMOST EMPTY flags, all internally synchronized (rising and falling edges) to the relevant clock domain. We just finished testing asynchronous operation at a 500 MHz read clock rate, with no error in >10e14 "going empty" cycles.

Peter Alfke, Xilinx Applications

> From: "Symon" <symon_brewer@hotmail.com>
> Newsgroups: comp.arch.fpga
> Date: Tue, 28 Sep 2004 11:28:05 -0700
> Subject: Re: Xilinx Read First Write First
>
> What about a FIFO? If it's empty, you want the first data in to appear right
> away.
> Cheers, Syms.
> "Brad Smallridge" <bradsmallridge@dslextreme.com> wrote in message
> news:10ljabn8ca1e8f6@corp.supernews.com...
> > So, I'm reading and writing on the same clock. I suppose if I wanted to do
> > this, maybe for speed, I would want the read to be the old data before the
> > write. Can't think of any reason I would want the new data going in. I
> > already have it.

Article: 73718
It's mainly for compatibility with the behavior of the first BlockRAMs. Nobody suggests that you should use it in newer designs...

Peter Alfke

> From: "Brad Smallridge" <bradsmallridge@dslextreme.com>
> Organization: Posted via Supernews, http://www.supernews.com
> Newsgroups: comp.arch.fpga
> Date: Tue, 28 Sep 2004 11:11:48 -0700
> Subject: Re: Xilinx Read First Write First
>
> So, I'm reading and writing on the same clock. I suppose if I wanted to do
> this, maybe for speed, I would want the read to be the old data before the
> write. Can't think of any reason I would want the new data going in. I
> already have it.

Article: 73719
Brian Davis wrote:
> Steven Knapp wrote:
> >
> > The limitation is if ...
> >
> > * The VCCO supply ramps faster than the minimum
> >   data sheet specification (Tcco)
>
> One question: is this strictly a power-up issue, or can it be
> triggered during operation?
>
> In particular, since the ramp spec. is worse for the leaded
> packages, could a large transient on the VCCO supply cause
> the same problem?
>
> e.g., if you configure a PQ208 with many parallel DCI
> terminations, at the end of configuration the VCCO supply will
> jump instantly from quiescent to full power ( maybe ~3 amps max.
> for a PQ208, but by that point you'd have heatsinking problems )

I don't think this is a problem. The ramp speed issue is only of concern at a voltage threshold around 0.8~1.0 volts. As it was explained to me, when the part powers up, there are a lot of transistors which are turned on initially. Once Vcc gets above about 1.0 volts, everything is biased and the transistors that need to be off are off. But the part draws a lot of current in the meantime as the voltage ramps. If the voltage ramps too fast, the PS can max out on current and for some reason this will disturb the part and it will not initialize correctly, to the point that a power down must be done to correct the problem.

So the problem is that you must let the part draw as much current as it needs as the voltage ramps up. The spec is to let you know how much current you will need for a given ramp rate. Keep the ramp rate slower than the worst case spec and the part will be happy with the current spec'd in the data sheet. After the voltage rises above about 1.0 volts this is no longer an issue, regardless of how the spec is written (or interpreted).

--
Rick "rickman" Collins
rick.collins@XYarius.com
Ignore the reply address. To email me use the above address with the XY removed.

Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design
URL http://www.arius.com
4 King Ave, Frederick, MD 21701-3110
301-682-7772 Voice
301-682-7666 FAX

Article: 73720
I wonder how you have connected the SRAM to MicroBlaze? In particular, please describe the low order bits of the address bus from the processor, and the low order address bits of the SRAM. I am guessing you have CPU addr bit 0 to SRAM addr bit 0. This is not correct.

On Tue, 28 Sep 2004 11:02:30 +0100, Ben G <nospam@nospam.nospam> wrote:
> I had tried using a char pointer as you suggested but the values I want
> to write out are 32 bit integers

If you only do word (4 byte) reads and writes, you probably need something like this:

Address pins
  CPU   SRAM
  0     no connection
  1     no connection
  2     0
  3     1
  4     2
  etc.

If you need byte reads and writes, then your SRAM must support this, and you use the CPU address bits 0 and 1 to select the byte.

Philip

> when I use a char pointer the correct value is not written to SRAM.

Because the C compiler probably generated a byte write, not a word write.

> Mem[80F0007E]=126
> Mem[80F0007F]=127

Neither xxxx7E nor xxxx7F are word addresses. Word addresses always have the lower 2 bits set to 0, and those bits are not used, as per the description I gave above.

> If I use a pointer but set the addresses manually as below it works. I
> set the value of i to a large integer value and then increment it for
> each location but set the address directly in the code.

The following code certainly matches my guess that you have not connected the address bus up correctly. That is, you are issuing word writes because you have declared sram_addr_data as a pointer to an int, yet you are using addresses that are byte addresses (non-zero in the two LSBs).

> i=268435456;
> sram_addr_data=(int *) 0x80F00000;
> *sram_addr_data = i;
> i++;
> sram_addr_data=(int *) 0x80F00001;
> *sram_addr_data = i;
> i++;
> sram_addr_data=(int *) 0x80F00002;
> *sram_addr_data = i;
> i++;
> sram_addr_data=(int *) 0x80F00003;
> *sram_addr_data = i;
> i++;

I think the thing you need to grasp is that both byte and word accesses use the same address bits. For words, only every 4th address is valid, i.e. addresses 0, 4, 8, 12, 16, 20, .... This is called aligned access. You can also do byte addressing with these values, and all the ones in between. Most RISC CPUs run this way. Other CPUs, such as the x86 devices, allow unaligned access, such as a word access from 3, but what actually happens is that the CPU does 2 accesses, to 0 and 4, and then assembles the word from byte 3 in the first word, and bytes 4, 5, and 6 in the second word. Most RISC CPUs don't do this, including MicroBlaze.

> This code results in the correct display in the terminal window but if I
> try to put this in a loop to do it automatically how do I get the
> pointer address to increment the way I need.

If the pointer is an int *, then pointer++ will increment it by 4, which will point to the next word. If the pointer is a char *, then pointer++ will increment it by 1, which will point to the next byte.

> The SRAM device I am connecting to has a 19 bit address bus and a 32 bit
> data bus. Am I right in thinking I should have 2^19 i.e. 524288
> addresses each capable of holding 4 bytes therefore 2097152 bytes.

Yes.

> This corresponds to the documentation which says the device is 2M. Using the
> code in my original post I am only getting access to a quarter of this.

Because I think you have not connected the address bus up correctly.

Philip

Philip Freidin
Fliptronics

Article: 73721
"Brad Smallridge" <bradsmallridge@dslextreme.com> wrote in message news:<10lhe4ms1oogbaa@corp.supernews.com>... > I have an out pin (it's the > Write*/Read SRAM line) and want to read it internally (one use would be to > control the tristate data outputs). If I need to read an OUT port from the same process that drives it, then I have written a state machine that lacks a process variable or architecture signal to maintain the local state value. For a simple entity example, a port output may be exactly the same as the local state value. In this case, declaring an extra variable or signal might seem annoying. However, for an industrial-strength entity, this is seldom the case. Processes maintain lots of state values that are only needed locally. For example, imagine a process that watches an single bit IN and drives single port bit OUT with a '1' when the input frequency is within a certain range or '0' otherwise. -- Mike TreselerArticle: 73722
"Symon" <symon_brewer@hotmail.com> skrev i en meddelelse news:2rti2pF1e4a0qU1@uni-berlin.de... > Rune, > Have you run the timing analyser to find out which path is failing? The > '(Levels of Logic = 9)' implies that it's not the carry chain, which, for > a > 28 bit counter, would be about 14 levels, 2 bits/level. I suggest you run > the analyser and post your failing path. > Cheers, Syms. > "Rune Christensen" <rune.christensen@adslhome.dk> wrote in message > news:41596bb8$0$257$edfadb0f@dread12.news.tele.dk... >> >> >> I have already done that but my goal is 8 ns and a simple implementation >> gives >> >> Slack: -1.380ns (requirement - (data path - clock path > skew >> + uncertainty)) >> Source: cnt_0 (FF) >> Destination: cnt_3 (FF) >> Requirement: 8.000ns >> Data Path Delay: 9.373ns (Levels of Logic = 9) >> Clock Path Skew: -0.007ns >> Source Clock: clk_BUFGP rising at 0.000ns >> Destination Clock: clk_BUFGP rising at 8.000ns >> Clock Uncertainty: 0.000ns >> > > ================================================================================ Timing constraint: TS_clk = PERIOD TIMEGRP "clk" 8 nS HIGH 50.000000 % ; 3436 items analyzed, 17 timing errors detected. (17 setup errors, 0 hold errors) Minimum period is 8.347ns. 
-------------------------------------------------------------------------------- Slack: -0.347ns (requirement - (data path - clock path skew + uncertainty)) Source: cnt_0 (FF) Destination: cnt_19 (FF) Requirement: 8.000ns Data Path Delay: 8.337ns (Levels of Logic = 11) Clock Path Skew: -0.010ns Source Clock: clk_BUFGP rising at 0.000ns Destination Clock: clk_BUFGP rising at 8.000ns Clock Uncertainty: 0.000ns Data Path: cnt_0 to cnt_19 Location Delay type Delay(ns) Physical Resource Logical Resource(s) ------------------------------------------------- ------------------- CLB_R23C3.S0.YQ Tcko 1.292 cnt<1> cnt_0 CLB_R28C2.S0.F1 net (fanout=4) 1.800 cnt<0> CLB_R28C2.S0.COUT Topcyf 1.486 _n0006<1> Madd__n0006_inst_lut2_01 Madd__n0006_inst_cy_0 Madd__n0006_inst_cy_1 CLB_R27C2.S0.CIN net (fanout=1) 0.000 Madd__n0006_inst_cy_1 CLB_R27C2.S0.COUT Tbyp 0.096 _n0006<2> Madd__n0006_inst_cy_2 Madd__n0006_inst_cy_3 CLB_R26C2.S0.CIN net (fanout=1) 0.000 Madd__n0006_inst_cy_3 CLB_R26C2.S0.COUT Tbyp 0.096 _n0006<4> Madd__n0006_inst_cy_4 Madd__n0006_inst_cy_5 CLB_R25C2.S0.CIN net (fanout=1) 0.000 Madd__n0006_inst_cy_5 CLB_R25C2.S0.COUT Tbyp 0.096 _n0006<6> Madd__n0006_inst_cy_6 Madd__n0006_inst_cy_7 CLB_R24C2.S0.CIN net (fanout=1) 0.000 Madd__n0006_inst_cy_7 CLB_R24C2.S0.COUT Tbyp 0.096 _n0006<8> Madd__n0006_inst_cy_8 Madd__n0006_inst_cy_9 CLB_R23C2.S0.CIN net (fanout=1) 0.000 Madd__n0006_inst_cy_9 CLB_R23C2.S0.COUT Tbyp 0.096 _n0006<10> Madd__n0006_inst_cy_10 Madd__n0006_inst_cy_11 CLB_R22C2.S0.CIN net (fanout=1) 0.000 Madd__n0006_inst_cy_11 CLB_R22C2.S0.COUT Tbyp 0.096 _n0006<12> Madd__n0006_inst_cy_12 Madd__n0006_inst_cy_13 CLB_R21C2.S0.CIN net (fanout=1) 0.000 Madd__n0006_inst_cy_13 CLB_R21C2.S0.COUT Tbyp 0.096 _n0006<14> Madd__n0006_inst_cy_14 Madd__n0006_inst_cy_15 CLB_R20C2.S0.CIN net (fanout=1) 0.000 Madd__n0006_inst_cy_15 CLB_R20C2.S0.COUT Tbyp 0.096 _n0006<16> Madd__n0006_inst_cy_16 Madd__n0006_inst_cy_17 CLB_R19C2.S0.CIN net (fanout=1) 0.000 Madd__n0006_inst_cy_17 
CLB_R19C2.S0.Y Tciny 0.545 _n0006<18> Madd__n0006_inst_cy_18 Madd__n0006_inst_sum_19 CLB_R17C3.S0.F3 net (fanout=1) 1.094 _n0006<19> CLB_R17C3.S0.CLK Tick 1.352 cnt<19> _n0002<19>1 cnt_19 ------------------------------------------------- --------------------------- Total 8.337ns (5.443ns logic, 2.894ns route) (65.3% logic, 34.7% route)Article: 73723
Rick,

This issue is (was) new: the ESD protection of the Vcco pins was firing on a high (very fast) dV/dt. Later mask sets got fixed, but some early mask sets are still in production with this restriction. Just ramp slower than indicated in the data sheet, and everything is fine. The protection is an active clamp (not an SCR), but it still 'latches'; removal of power is required to reset it. The circuit can not be triggered in normal operation, as it is only used on the Vcco pins, not the IO pins themselves.

This is NOT the power-on current issue that was in Virtex, Virtex-E, Spartan-2, and Spartan-2E as you describe it. In those cases, the current must be supplied to start up the device. It may have acted like an SCR or clamp, but the mechanism was completely different. As well, all of those devices were characterized, and production screens put in place, so that we guaranteed start-up if there was at least the amount of current specified in the data sheet present for Vccint. Subsequent to Virtex-E, we designed out the current surge issue for the core. (VII, and all later parts, do not have an issue with Iccint at startup.)

Austin

rickman wrote:
> Brian Davis wrote:
>
>> Steven Knapp wrote:
>>
>>> The limitation is if ...
>>>
>>> * The VCCO supply ramps faster than the minimum
>>>   data sheet specification (Tcco)
>>
>> One question: is this strictly a power-up issue, or can it be
>> triggered during operation?
>>
>> In particular, since the ramp spec. is worse for the leaded
>> packages, could a large transient on the VCCO supply cause
>> the same problem?
>>
>> e.g., if you configure a PQ208 with many parallel DCI
>> terminations, at the end of configuration the VCCO supply will
>> jump instantly from quiescent to full power ( maybe ~3 amps max.
>> for a PQ208, but by that point you'd have heatsinking problems )
>
> I don't think this is a problem. The ramp speed issue is only of
> concern at a voltage threshold around 0.8~1.0 volts. As it was
> explained to me, when the part powers up, there are a lot of transistors
> which are turned on initially. Once Vcc gets above about 1.0 volts,
> everything is biased and the transistors that need to be off are off.
> But the part draws a lot of current in the meantime as the voltage
> ramps. If the voltage ramps too fast, the PS can max out on current and
> for some reason this will disturb the part and it will not initialize
> correctly, to the point that a power down must be done to correct the
> problem.
>
> So the problem is that you must let the part draw as much current as it
> needs as the voltage ramps up. The spec is to let you know how much
> current you will need for a given ramp rate. Keep the ramp rate slower
> than the worst case spec and the part will be happy with the current
> spec'd in the data sheet. After the voltage rises above about 1.0 volts
> this is no longer an issue regardless of how the spec is written (or
> interpreted).

Article: 73724
Guy wrote:
> Let me first say thank you for your responses.
> To address some of the questions / comments:
> I realize that not everyone will extract the same application or value
> from the on-chip NV memory; however, since it has the potential to
> provide or support different applications, general enough value may be
> justified for inclusion.

Since this is horizon gazing, what about looking into FPGA+MRAM - then
you can offer SRAM to all users, and do not have to do a RAM/OTP die
trade-off - plus it is much easier to explain to users. [FPGA designers
are not always the most hardware literate :) ]

For an example of MRAM, see
http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=MR2A16A&nodeId=015424

> Answers / Comments for Jim and Hal:
> a. bad bits: built-in ECC would make bad bits transparent to the user

What about a bad row/column select line - or is enough fringe test
memory included to test all array access lines?

> b. speed = assume approximately 25ns access time

Any Icc projections? [ROM is usually quite low power]

> c. width = flexible, 16/32/64 bit

Why stop at 64 wide? One edge that on-chip memory gives is 'free width',
and that can translate to lower power.

> d. secure code storage would come by two assumptions: i) the on-chip
> processor removes any need for external access to the memory;
> ii) readback of the memory via JTAG etc. would be programmed as
> disabled by the user
>
> e. on-chip charge pumps require no special external voltage supply
>
> f. user application would be able to log/write data to NV memory for
> storage

Any values/estimates on write times / write energies / block sizes?
What about Read-While-Write - which is a common drawback in FLASH?

> g. external applications could read NV data too, but you obviously
> lose security

But selectively, one would hope?

> h. OTP = nonerasable
>
> i. no tradeoff with NV and RAM-like functionality - it would be a
> standard feature, not a swappable block within the silicon family

I was meaning more software than hardware - the issue is development,
and early production, where users need to not have ROM. That may mean a
bigger device (with the extra SRAM) and a re-compile, or external NV
memory.

There is also potential here for power saving, as external memory will
always be width constrained to save pins, plus have all the BUS
capacitance. Internal ROM can be wider, so can use a lower clock for the
same BUS bandwidth.

> j. yes for non-secure, it would enable integration into the FPGA of
> on-board NV data for some applications (mature s/w code, for example).
> Realize also that for some applications, you can create a sense of
> "virtual" unlimited multi-write capability. To demonstrate what I mean
> by virtual multi-write, I'll use the following example. Let's say
> that you know you may need to write 10,000 maximum lifetime events at
> 100 bits data each but do not need to keep history for more than 16
> events. As a designer, you could buy a tiny block of flash or EEPROM
> of 1.6kbits and keep writing over the older events. Or, with OTP, you
> could simply purchase enough OTP memory to store the entire lifetime's
> worth of events. Many applications that store NV data can quantify
> the lifetime of storage from a practical specification. One example:
> televisions need to store user configuration (like color, tint,
> favorite channels etc.) and need to provide the user with unlimited
> adjustments of this data. However, if you make some assumptions, the
> design can calculate an upper limit of needed storage. For example,
> let's say 512 bits stored, TV life is 20 years. I would venture to
> guess that if you assumed that data storage would occur at most once a
> day for each of 20 years, you would be happy to remove the $1 EEPROM
> from the board as long as the on-chip cost of the NV block was less.
> 20yr * 512b * 365days ~= 3.7Mbit total
>
> Any other creative thoughts on how this could be used would be
> appreciated.

<paste>
Nicholas Weaver wrote:
> There is already a more secure mechanism for this: SRAM-based
> encryption keys used to load encrypted bitfiles. That mechanism can
> be used to bootstrap a large non-volatile store, with a keystone of
> the SRAM-based encryption key, which is probably significantly harder
> to reverse/crack than on-chip static bits.
>
> Thus the "savings" by putting it on-chip are not security, but the
> cost savings of not needing a large external Flash PROM.

That's true for bitstreams, but not as easy for external code. I can
see an application where the user defines a 'ROM bus scrambler' that is
used to load the external memory, and then reverses the scramble on
read. See the Dallas secure microcontroller families.

Boot load code could be 'password zipped', where more time is tolerated
to unpack and shift to the external memory. So you get medium levels of
security, with low-cost (?) silicon, and without needing too special a
design flow.

-jg
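Guy's TV sizing above is easy to sketch as code; note the product of his
quoted figures works out to about 3.7 Mbit. The helper name and the
once-a-day assumption below just restate the quoted example - this is an
illustration of the arithmetic, not anyone's real sizing tool.

```python
# Sizing sketch for the 'virtual multi-write' OTP idea quoted above:
# rather than re-writing a small EEPROM, reserve enough one-time-
# programmable bits to hold every write the product will ever make.

def otp_bits_needed(record_bits, writes_per_day, lifetime_years):
    """Total OTP bits needed to log every record over the product lifetime."""
    return record_bits * writes_per_day * lifetime_years * 365

# Guy's TV example: 512-bit config, written at most once a day, 20 years
total = otp_bits_needed(record_bits=512, writes_per_day=1, lifetime_years=20)
print(f"{total} bits ~= {total / 1e6:.1f} Mbit")  # prints: 3737600 bits ~= 3.7 Mbit
```

The same helper answers Guy's earlier event-log example too
(`otp_bits_needed(100, ...)` for 10,000 lifetime events gives 1 Mbit of
OTP versus the 1.6 kbit rewritable block).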