Hi Antti,

> > > if in hurry then you must either use TQFP100 package or BGA

Finally I think I won't use a PLD at all. An RC filter for a "quick & dirty" pre-debounce and a simple 8-bit µC to handle the rest of the logic will do. It doesn't need to be very fast anyway.

Sylvain

Article: 86501
"Sylvain Munaut" <com.246tNt@tnt> schrieb im Newsbeitrag news:42c27a2c$0$29635$ba620e4c@news.skynet.be... > > Hi Antti, > > > > > > > if in hurry then you must either use TQFP100 package or BGA > > > Finally I think I wont use PLD at all. > A RC filter for "quick & dirty" pre-debounce and a simple 8bit µC > to handle to rest of the logic will do. It doesn't need to be very > fast anyway. > > > Sylvain HAHA, that is good optimization!!!! from 500LE down to 0! with proper firmware you may not need the external debounce but it all depends sometime the rc network is good choice sure if there is no direct need for high speed logic then small flash micro is better choice. my current favorite is ATmega8 in QFN32 package, but thats all a matter of taste AnttiArticle: 86502
Thanks for the comments (Andrew & JJ). Let me explain a bit. You asked for this! :-)

I don't need an FFT because I don't need to multiply numbers. The sieving can be performed on 'small' integers (64 bits gives a very large scope) using addition, subtraction, left shifting by one bit and comparisons. No multiplication and no floating point makes it much easier.

Proth sieving has some unique characteristics that make it different from the Mersenne primes research. The main goal is to maximise the number of candidates sieved per unit of time. This splits into two separate goals:

- Maximising the speed of each individual siever.
- Maximising the number of sievers on the FPGA.

Obviously pipelining may be required to complete one iteration of the loop within one clock cycle, but that may increase the number of slices used by each siever, bringing down the number of sievers that I can place on the FPGA.

I'm playing around with a Spartan-3 starter kit as that is the only platform I have for development. This is a hobby project (not a university project); if I can get reasonable performance then I may consider moving up to a larger FPGA with more gates (to get more sievers). I'll only ever be working with dev kits. I don't intend (unless the performance is stunning) to ever build anything with multiple FPGAs, although my inner geek would love to have a box with lots of blinkenlights and serious amounts of processing power.

The 50 MHz comment was purely my brain not working. I had glanced at the description and the mention of a 50 MHz oscillator had stuck in my brain, so I assumed the final solution would be limited to 50 MHz. I've no idea how fast an XC3S200 can actually run, or would run with this 'design'. (Could anyone give a rough guess?)

As for the actual problem I want to solve, here's a 'quick' description of Proth sieving and why I want to do it.

Proth numbers are numbers of the form k*2^n+1 where 2^n > k. k should be odd (for obvious reasons). A Sierpinski number (of the second kind) is a value of k such that k*2^n+1 is composite for all n. k=78557 will always produce composite numbers for any positive n you pick; this has been proven. To prove that a k is not a Sierpinski number one must find a value of n such that k*2^n+1 is prime. The Sierpinski conjecture is that 78557 is the lowest such value of k. Of the 39278 other odd candidates, primes have been found for all but 9. The conjecture is regarded as true by most people, and so primes for the other 9 k-values are expected to be found. There's more on the Sierpinski problem here:

http://www.seventeenorbust.com/

(That URL looks a bit suspect but trust me on this. There used to be 17 candidate k-values but the above project has found primes for 8 of them.)

The above site is running a distributed attack (a la SETI, Cancer@Home, etc.) to find primes. The numbers involved are extremely large. The most recent prime was found for k=27653 at n=9167433. That's 27653*2^9167433+1, a nice 2,759,677 digits in decimal. Trying to work with those kinds of numbers on an FPGA would be madness.

Primality of these numbers can be proven with a Proth test. http://mathworld.wolfram.com/ can be useful too if you want to read up any more on this. Trying to prove primality of a 2.7-million-digit number is not an easy task; it can take a month on a 3 GHz Intel P4. The equation involved is: let N=k*2^n+1; if

a^((N-1)/2) mod N = N-1

then N is prime. The value of a used is usually the lowest odd prime that is not a factor of k.
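As a toy-sized illustration of that test (just the formula, nothing like the multi-million-digit arithmetic the project actually needs), the sketch below checks N = 3*2^2+1 = 13 with a = 5 using 64-bit modular arithmetic; the choice of k, n and a here is purely for the example.

    #include <stdint.h>
    #include <stdio.h>

    /* b^e mod m for moduli below 2^63, using the gcc/clang __uint128_t
     * extension for the intermediate product. */
    static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m)
    {
        return (uint64_t)((__uint128_t)a * b % m);
    }

    static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m)
    {
        uint64_t r = 1 % m;
        while (e) {
            if (e & 1) r = mulmod(r, b, m);
            b = mulmod(b, b, m);
            e >>= 1;
        }
        return r;
    }

    /* Proth test: N = k*2^n + 1 with 2^n > k is prime if
     * a^((N-1)/2) == N-1 (mod N). */
    int main(void)
    {
        uint64_t k = 3, n = 2;              /* N = 3*2^2 + 1 = 13              */
        uint64_t N = (k << n) + 1;
        uint64_t a = 5;                     /* lowest odd prime not dividing k */
        uint64_t res = powmod(a, (N - 1) / 2, N);

        printf("N = %llu, a^((N-1)/2) mod N = %llu, N-1 = %llu -> %s\n",
               (unsigned long long)N, (unsigned long long)res,
               (unsigned long long)(N - 1),
               res == N - 1 ? "prime" : "test inconclusive");
        return 0;
    }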
Since the remaining k-values being tested are all prime, a=3 usually works.

Now consider the info above. In the above case N is a 2.7-million-digit number. (N-1)/2 may have the same number of digits as N, or one fewer. Still an insanely large number. 3^99 (a 2-digit exponent) results in a 48-digit number. 3^999 (a 3-digit exponent) results in a 477-digit number. Now consider 3^N where N has 2.7 million digits. Big.

Anyway, on to the sieving. The project has produced a large number of k,n pairs that need to be tested, and each Proth test takes a long time. Sieving is the process of removing candidate numbers by trial division with known primes. For example, one of my computers is checking the range of primes from 882000000000000 to 883000000000000 and, amongst others, it's found that 882347066940847 is a factor of 24737*2^80174983+1. It has found 17 unique factors in the week that it has been running. That's 17 k,n pairs that do not need to be Proth tested. Those Proth tests would have run for 17 months on the same machine only to not find any primes. This is the value of sieving.

OK, so an FPGA won't be good at dealing with large (i.e. 10-million-bit) numbers. So why am I here clogging up this newsgroup with my ramblings? It's because Proth numbers (k*2^n+1) have some properties that make sieving easy.

Say I'm checking k=10223 and my prime of choice is p=882319864863617. I compute the remainder of (k*2)+1 mod p, which, when testing such large primes, is simply (k*2)+1:

r = 10223*2+1 = 20447 (that's for n=1)

Starting with (k*2^n)+1:
- subtract the 1 we added on: k*2^n
- double the number: (k*2^n)*2 = k*2^(n+1)
- add one: (k*2^(n+1))+1

More importantly, these actions can be performed on the remainder without having to keep track of the actual value of k*2^n+1. So:

r_next = (2*(r-1))+1 mod p = 2r-1 mod p

The 'mod p' part is quite expensive. Since we are only dealing with a doubling of the previous r (which we knew was guaranteed to be less than p), we know that, at most, r could grow to less than 2p. This is why 'mod p' is replaced by a simple subtraction of p if r >= p.

If r is ever equal to 1 then it will always remain 1, so we can stop processing (for the FPGA I wouldn't even bother passing such a case on as a work unit). If r is ever equal to 0 then it means that p is a factor of k*2^n+1 at whatever n the algorithm has iterated to. We can stop here and record the value of n.

So with relatively little effort we can check for factors at very large values of n without having to keep track of the large (multi-million-digit) values. The snag is that we have to compute r for every n along the way. This is why I can't do a coarser run by skipping values: each iteration must be performed and the value of r checked. There is no way to predict future values of r without computing them.

There's another stop condition: if we ever get back to the r value we started at (when n=1) then we know we can stop, because the r values will just follow the same pattern again. At this point I'd like to stop and record the fact that no factor was found (otherwise I would have stopped earlier) for this k,p pair.

The modulus operation involved has an important bearing on the size of this loop. Say I'm checking a number against p=3. Consider all of the possible values of r and what they transform to under 2x-1 mod 3:

0 -> (2*0)-1 mod 3 == 2
1 -> (2*1)-1 mod 3 == 1
2 -> (2*2)-1 mod 3 == 0

The maximum number of values r can take is p, but r=1 is a special case, so the maximum loop size would be p-1. Some values of p can produce smaller sub-loops.
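Put into code, the loop just described is tiny. The following is a plain-C sketch of it (a software illustration, not the FPGA implementation), using toy-sized numbers so the result can be checked by hand; in hardware each iteration is presumably little more than a shift, a decrement and a conditional subtract.

    #include <stdint.h>
    #include <stdio.h>

    /* Track r = (k*2^n + 1) mod p for n = 1..n_max using only doubling and
     * a conditional subtract.  Returns the n at which p divides k*2^n + 1,
     * or 0 if no factor was found (r hit 1, r cycled back, or n_max was
     * reached).  p is assumed small enough that 2*r cannot overflow 64 bits. */
    uint64_t sieve_one(uint64_t k, uint64_t p, uint64_t n_max)
    {
        uint64_t r0 = (2 * k + 1) % p;      /* remainder at n = 1             */
        uint64_t r  = r0;

        for (uint64_t n = 1; n <= n_max; n++) {
            if (r == 0) return n;           /* p is a factor of k*2^n + 1     */
            if (r == 1) return 0;           /* r stays 1 forever: give up     */

            r = 2 * r - 1;                  /* next remainder (n -> n + 1)    */
            if (r >= p) r -= p;             /* cheap replacement for mod p    */

            if (r == r0) return 0;          /* back at the start: a sub-loop  */
        }
        return 0;
    }

    int main(void)
    {
        /* Toy check: prints 9, because 13 divides 10223*2^9 + 1 = 5234177. */
        printf("%llu\n", (unsigned long long)sieve_one(10223, 13, 100));
        return 0;
    }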
2x-1 mod 7:

0 -> (2*0)-1 mod 7 == 6
1 -> (2*1)-1 mod 7 == 1
2 -> (2*2)-1 mod 7 == 3
3 -> (2*3)-1 mod 7 == 5
4 -> (2*4)-1 mod 7 == 0
5 -> (2*5)-1 mod 7 == 2
6 -> (2*6)-1 mod 7 == 4

0 -> 6 -> 4 -> 0
1 -> 1
2 -> 3 -> 5 -> 2

So if my initial value of r had been 2, I could loop endlessly without ever finding a factor (r=0). This is why it's important to check whether we've returned to the initial value of r.

The sieving effort involved in the SoB project above has put an arbitrary limit of 50 million on the upper bound of n. Proth testing a k,n pair where n is around 50M would take a long, long time, but sieving up to that range is relatively cheap. If, two years down the road, the project is looking at n values up in that range, we'll be thankful that we've sieved up there.

As I said above, the maximum size of the loop is p-1. So for p=882319864863617 this would mean a maximum loop size of 882,319,864,863,616. This is why 99.9% of the runs will completely iterate up to n=50,000,000. It's actually closer to 99.999999%. From the numbers above there is roughly a 1 in 17,646,397 chance that p will be a factor of k*2^n+1 with n between 1 and 50,000,000. (This isn't strictly true due to the possibility of smaller sub-loops, but it gives you an idea of the numbers involved.)

Also, by recording the value of r when I do reach the end of the loop, I can continue this at a later date. I'd only need r at n=1 (trivial to compute) and the value of r at a given n, and I can continue on my merry way.

The faster we can sieve, the faster we can rule out more and more k,n pairs. The fewer pairs we have, the faster sieving becomes (although only slightly), and so we can sieve out higher and higher primes. *THIS* is why I'm interested in an FPGA helping out. They have the capability of being much, much faster than traditional CPUs.

If you've read this far then thank you. Reward yourself with a coffee/tea/walk/cigarette/nap/whatever.

Ta,
-Alex

Article: 86503
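The sub-loop behaviour above is easy to reproduce. The short sketch below enumerates the cycles of x -> 2x-1 mod p for p = 7 and prints exactly the three loops listed; the map is a bijection for odd p, so every start value sits on some cycle.

    #include <stdio.h>

    /* Enumerate the cycles of x -> (2x - 1) mod P.  For P = 7 this prints
     *   0 -> 6 -> 4 -> 0
     *   1 -> 1
     *   2 -> 3 -> 5 -> 2
     * matching the table above. */
    #define P 7

    int main(void)
    {
        int seen[P] = {0};

        for (int start = 0; start < P; start++) {
            if (seen[start]) continue;
            int x = start;
            do {
                seen[x] = 1;
                printf("%d -> ", x);
                x = ((2 * x - 1) % P + P) % P;  /* +P keeps x = 0 non-negative */
            } while (x != start);
            printf("%d\n", start);
        }
        return 0;
    }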
Hello,

I use a Virtex-II Pro with the PowerPC at 300 MHz, 8 kB IOCM, 32 kB DOCM and external 32 MB SDRAM (connected on the PLB). When I read 32 MB from the SDRAM 10 times, it takes 3.7 s, and when I write the same 320 MB to the SDRAM it takes 9.6 s without burst support and 6 s with burst support.

Does anyone know why the rate is about 85 MByte/s for reading the SDRAM but at most 53 MB/s for writing? Normally, with the 64-bit PLB, it should be much more?

Pierre

Article: 86504
Hello,

I use a Virtex-II Pro with the PowerPC at 300 MHz, 8 kB IOCM, 32 kB DOCM and external 32 MB SDRAM (connected on the PLB). When I read 32 MB from the SDRAM 10 times, it takes 3.7 s, and when I write the same 320 MB to the SDRAM it takes 9.6 s without burst support and 6 s with burst support.

Does anyone know why the read rate is 85 MByte/s while the write rate is 53 MB/s at best? Normally, with the 64-bit PLB, it should be much more?

Pierre

Article: 86505
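Just to make the quoted figures explicit (this is only the arithmetic, not an explanation of the PLB behaviour): 10 passes over 32 MB is 320 MB in total, so the rates work out as below.

    #include <stdio.h>

    /* Back-of-the-envelope rates for 10 x 32 MB = 320 MB of data. */
    int main(void)
    {
        const double mbytes = 320.0;
        printf("read           : %.0f MB/s\n", mbytes / 3.7);  /* ~86 MB/s */
        printf("write, no burst: %.0f MB/s\n", mbytes / 9.6);  /* ~33 MB/s */
        printf("write, burst   : %.0f MB/s\n", mbytes / 6.0);  /* ~53 MB/s */
        return 0;
    }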
"David Brown" <david@westcontrol.removethisbit.com> wrote in message news:d9tjh9$lpo$1@news.netpower.no... > Kolja Sulimma wrote: >> Hmm. As I understand it, if the algorithm is known, a skilled attacker >> will find the key with differential power analysis within hours if he >> can take the board to the lab. > I'm not an expert (I'm not even an amateur) in cryptography, but can you > not fool differential power analysis by adding extra logic to cause > unpredictable switching to swamp any useful power analysis? If it is random noise, then one could collect many samples of the same cryptographic operation and average them out. The noise averages toward zero, the signal increases with more samples. So it puts the cryptanalyst to more trouble but not insurmountable. In DAC chips, I noticed that bits would switch currents into the output circuit or a 'dumping' load. This is done so there is a constant current load on the power rail and thus less noise to affect other bits. Ideally a crypto device would do the same thing. However, I should point out that all current FPGA chips are unsuitable for cryptography where a skilled attacker can get hold of it. They are very regular in structure, relatively easy to probe, and not built with anti-analysis features. Modern smart card chips are designed to make attack harder. Key bus and control lines might be buried so one has to etch as well as probe, data storage physically jumbled, and so on. They can still be analysed given enough time and money, but the aim is to make this more expensive than what the attacker can gain from the process. That said, the Cambridge company N-Cypher uses FPGA chips in its web-encryption products. These sit in the service provider's locations, so they are more secure than being out in the field. K.Article: 86506
Antti Lukats wrote:
> [...]
> AR21127 says that if the V4 is powered but not configured the DCM
> performance will start to degrade (there are actual changes in the
> silicon!), but this process is reversible, like self-healing, so if the V4
> is later powered and configured for the same amount of time then DCM
> performance will slowly be restored to be inside the specification again.

Howdy Antti (or Austin),

Could you point me to where you saw something official from Xilinx (unfortunately a newsgroup posting isn't official) about the DCM recovering its max frequency specs after being configured or unpowered for a period of time? All I can find is a description of it degrading.

BTW, the following answer record may be of interest to you and others:
http://xilinx.com/xlnx/xil_ans_display.jsp?getPagePath=21435

Thanks,

Marc

Article: 86507
Antti Lukats wrote:
> "Austin Lesea" <austin@xilinx.com> wrote in message news:d9rpon$lbe1@cliff.xsj.xilinx.com...
> > Comment:
> >
> > Virtex 4 in the smaller parts (LX15, LX25, FX12) are all less than $100
> > (based on forward pricing in quantities, see the various press releases).
> >
> > If this is a hobby project, then you are better off going to the Xilinx
> > web store and buying a Spartan 3 (or buying the Digilent S3 pcb complete).
>
> Hi Austin,
>
> I think my estimate was more realistic; it is for a small-volume project
> (at least the way I understood the OP) and I did mean pricing as of today.
> Sure, the prices for the devices you mentioned do fall below the $100
> margin, some day in some quantity. But for buying a small quantity of V4 as
> of today I would expect near-$100 pricing, or am I misunderstanding the
> pricing policy? I don't remember seeing V4 price forecasts going much lower
> than $80 USD for 5-10k yearly volumes. Sure, I would welcome lower pricing :)

Howdy Antti,

Agreed - the Xilinx press releases are near useless. Looks like you have to get into some decent quantity before the price drops below $100:

http://groups.google.com/groups?q=xc4vfx12

Marc

Article: 86508
> Does it make sense to stick a circular buffer of size a bit bigger
> than N at the top of the pipeline, and tag each entry with a short
> quantity indicating where in the buffer its input is stored?

In my case, the pipeline input is not stored at all. However, it will be needed at the moment the pipeline produces the output for that input. Any recommendations?

Article: 86509
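For what it's worth, the scheme in the quoted question looks roughly like the sketch below in software terms (in hardware the buffer would presumably be a small dual-port RAM addressed by the tag). The element type and the buffer size of 16 are arbitrary assumptions; the only real requirement is that the buffer be deeper than the pipeline, so a slot is never overwritten before its result emerges.

    #include <stdint.h>

    /* Inputs are parked in a small circular buffer as they enter the
     * pipeline; each in-flight item carries only the slot index (the
     * "tag"), and the full input is looked up again when the pipeline
     * result comes out N cycles later.  BUF_SIZE just has to exceed the
     * pipeline depth N. */
    #define BUF_SIZE 16

    typedef struct {
        uint64_t input[BUF_SIZE];   /* parked pipeline inputs */
        unsigned wr;                /* next free slot         */
    } input_buf_t;

    /* Called when an input enters the pipeline; returns the tag it carries. */
    unsigned buf_push(input_buf_t *b, uint64_t in)
    {
        unsigned tag = b->wr;
        b->input[tag] = in;
        b->wr = (b->wr + 1) % BUF_SIZE;
        return tag;
    }

    /* Called when the pipeline delivers a result tagged 'tag'. */
    uint64_t buf_lookup(const input_buf_t *b, unsigned tag)
    {
        return b->input[tag];
    }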
David Brown wrote:
> > How will the master know that this byte is the last byte
>
> Can you be more specific on what you mean by this? (Are you referring to
> the SPD controller, or the state machine or microprocessor controlling it?)

I am referring to the SPD controller.

> > and what happens if the master acknowledges the last byte also.
>
> Referring to the I2C spec, page 10, section 7.2: "If a master-receiver is
> involved in a transfer, it must signal the end of data to the
> slave-transmitter by not generating an acknowledge on the last byte that
> was clocked out of the slave. The slave-transmitter must release the data
> line to allow the master to generate a STOP or repeated START condition."
>
> I don't really know what will happen with the slave device you are using
> if you don't NACK the last byte. It might work just fine, or, if the SPD
> puts out one additional clock at the end of the ACK bit position to
> release the ACK/NACK, the slave device would think the next byte should be
> read and would assert the MSB of the next byte on the data line. If the
> MSB of the next byte were a 0, then the master would not be able to apply
> a stop condition (remember it's open drain, or wired-AND), and the system
> would effectively be unsynchronized. The master may believe it terminated
> the read sequence correctly, while the slave could still be driving the
> data line low with its MSB, waiting to shift out the next bit of the byte.
> Subsequent communication with the slave device would fail, until the
> master outputs a continuous 1 (0xFF) pattern to the device, allowing the
> device to eventually get a NACK and terminate the read operation. The
> master could then attempt to repeat the sequence for a read operation.
>
> Based on the spec and the possibilities of what you can look forward to
> and the corrective actions, I would recommend following it for
> compatibility reasons.
>
> dbrown
>
> <praveen.kantharajapura@gmail.com> wrote in message
> news:1118720948.402364.49090@g49g2000cwa.googlegroups.com...
> >
> > David Brown wrote:
> > > One additional note. As mentioned below, the master should ACK each
> > > byte that it reads, except for the last byte. The last byte should be
> > > NACKed prior to the stop condition.
> >
> > How will the master know that this byte is the last byte, and what
> > happens if the master acknowledges the last byte also.
> >
> > > dbrown
> > >
> > > "Gabor" <gabor@alacron.com> wrote in message
> > > news:1118690295.803563.225360@g43g2000cwa.googlegroups.com...
> > > > praveen.kantharajapura@gmail.com wrote:
> > > > > Hi Gabor,
> > > > >
> > > > > Thanks for the reply. My EEPROM is write protected; I will only be
> > > > > reading the first 128 bytes. Is this flow diagram all right?
> > > > >
> > > > >   1-bit                 8-bits                              1-bit
> > > > >   Start from master --->> EEPROM slave address ("10100001") --->> ACK from EEPROM --->>
> > > >
> > > > Actually you need to start with write address "10100000" in order
> > > > to write the address register (you don't need to write the EEPROM
> > > > array so write protect doesn't matter).
> > > >
> > > > >   8-bits                                    1-bit
> > > > >   Write register address "00000000" --->> ACK from EEPROM
> > > >
> > > > Right here you need to switch to read mode. There are two
> > > > ways to do this. Either master sends Stop followed by Start
> > > > or master sends repeated start.
> > > > If you intend to reuse this code for other peripherals besides
> > > > EEPROM, you'll find the repeated start is compatible with more chips.
> > > >
> > > > Then you need to provide slave address "10100001" for read
> > > > and get ACK from slave, then:
> > > >
> > > > >   8-bits                        1-bit
> > > > >   --->> Data[0] from EEPROM --->> ACK from master
> > > > >
> > > > >   8-bits                                            1-bit
> > > > >   ................. --->> Data[127] from EEPROM --->> STOP from master
> > > > >
> > > > > I will generate the STOP condition after receiving 128 bytes.
> > > > >
> > > > > Any comments on this.
> > > > >
> > > > > Regards,
> > > > > Praveen
> > > >
> > > > Also, you talk about "bits" when you send start and stop. These
> > > > conditions do not toggle the SCL line so they are not usually counted
> > > > as "bits" as would be data or ACK cycles.
> > > >
> > > > Regards,
> > > > Gabor

Article: 86510
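Condensing the whole exchange above into one place, a sketch of the SPD random-read flow might look like the C below. The i2c_* helpers are hypothetical placeholders for whatever primitives the controller state machine actually provides (they are not a real API); the addresses 0xA0/0xA1 are the "10100000"/"10100001" bytes from the thread.

    #include <stdint.h>

    /* Hypothetical low-level bus primitives (placeholders, not a real API). */
    int     i2c_start(void);                  /* START or repeated START       */
    int     i2c_stop(void);                   /* STOP                          */
    int     i2c_write_byte(uint8_t b);        /* returns 1 if the slave ACKed  */
    uint8_t i2c_read_byte(int send_ack);      /* send_ack = 0 -> NACK the byte */

    /* Read 'len' bytes of SPD data starting at word address 0. */
    int read_spd(uint8_t *buf, int len)
    {
        i2c_start();
        if (!i2c_write_byte(0xA0)) return -1; /* slave address, write          */
        if (!i2c_write_byte(0x00)) return -1; /* word address 0                */

        i2c_start();                          /* repeated START                */
        if (!i2c_write_byte(0xA1)) return -1; /* slave address, read           */

        for (int i = 0; i < len; i++)
            buf[i] = i2c_read_byte(i < len - 1);  /* ACK all but the last byte */

        i2c_stop();                           /* NACK sent on last byte, then STOP */
        return 0;
    }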
Alex wrote:
> Thanks for the comments (Andrew & JJ). Let me explain a bit.
> You asked for this! :-)
>
> I don't need an FFT because I don't need to multiply numbers.
> The sieving can be performed on 'small' integers (64 bits
> gives a very large scope) using addition, subtraction, left
> shifting by one bit and comparisons.
>
> No multiplication and no floating point makes it much easier.

(snipping all to spare the BW)

Well, that was quite an interesting read, and links too. Right now it's still over my head since it's not my hobby or work interest, but it still points to the challenges in an FPGA design. But since I know the simple sieve of Eratosthenes very well, I'll come back to this later as a possible CPU/memory benchmark.

I think starting with the starter kit is an excellent idea; when you reach the best-case solution you can at least speculate about what bigger HW could do. On the other hand, you could avoid this HW experiment and keep it in C for quite a while longer.

When I model my CPU project, I don't use Verilog simulation. I write in a form of HDL C that allows the C model to run in the standard VC6 compiler on a standard PC. This C RTL (register transfer level) code runs about 20x slower than simple-minded C code that has no HW detail, but it also runs orders of magnitude faster than Verilog simulation. The C RTL code, though, looks so much like Verilog that it can be hand translated, with some grepping, back into Verilog for the actual FPGA synthesis. This might not be so easy for VHDL since its syntax is not like C. You could reasonably experiment with C RTL engines that are orders of magnitude larger than any FPGA could actually handle, and you still have the PC memory and disk resources.

So far you are going backwards by about 20x if you do what I suggest, but then with a simple C RTL translation effort you have Verilog that can be synthesized into any FPGA, provided you understood the architecture of the FPGA. Now with the C RTL model you can have your simulator count cycles and HW costs without ever going to HW. But you will need to port the C RTL to Verilog to find out what your clock frequency limit is going to be.

What I like about this approach is that lots of architectures can be explored, and with my HW experience in hand I know what to expect when I go to synthesis. Exploring architecture is relatively painless in your fav C IDE, and you can evolve the architecture towards one that just flies through the FPGA tools. Spending a lot of time in WebPack trying to get optimal results leads to great frustration. You might also look at Handel-C, but $$$.

For the long term, I still think big DRAM is going to be a big headache since I see poor locality of reference written all over this project, i.e. SRAM cache will be useless. You could take a look at Micron's RLDRAM, which performs like a DRAM in density and like an SRAM in speed. It does this by allowing accesses to start every 2.5 ns, with 8 in flight, and data comes out after 20 ns or 8 clocks. If you can arrange for interleaved engines to sync with this, you can get 32 MByte of RLDRAM effectively working at 2.5 ns, but you need good HW pipeline design, and this is high-end FPGA stuff, i.e. V2Pro and above. These accesses can be fully random to any address provided the bank is already done with the previous job.

For Spartan-3 best speed grade, WebPack 6.3 was giving me 225 MHz or so for a 16-bit CPU datapath, with the 16-bit adder and dual-port block RAM both close to the limit. I use 2 clocks to get a 32-bit datapath with a 4-way ported BRAM. Others here may have gotten better results.
I mostly work on V2Pro, about 1.5x faster, and it seems WebPack 7.1 has slowed things down some (I dunno why). If you start adding numbers wider than 16 bits unpipelined, your performance will go to hell pretty quickly; you need to pipeline either at the 16-bit or 32-bit add level.

Best of luck with this project.

Regards
johnjakson at usa dot com

PS: If you like, I can send you C RTL sample code.

Article: 86511
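For readers wondering what such "C RTL" can look like, here is a generic sketch of the style (not John's actual code): every register gets a current and a next value, combinational functions only write the next values, and a single tick() updates them all at once, which is what keeps the C readable as, and hand-translatable to, Verilog. The 8-bit counter is just a placeholder design.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint8_t count, count_nxt;   /* register: current and next value */
    } regs_t;

    static void comb(regs_t *r, int enable)   /* ~ always @(*)           */
    {
        r->count_nxt = enable ? (uint8_t)(r->count + 1) : r->count;
    }

    static void tick(regs_t *r)               /* ~ always @(posedge clk) */
    {
        r->count = r->count_nxt;
    }

    int main(void)
    {
        regs_t r = {0, 0};
        for (int cycle = 0; cycle < 5; cycle++) {
            comb(&r, 1);
            tick(&r);
            printf("cycle %d: count = %u\n", cycle, r.count);
        }
        return 0;
    }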
Alex,

I think there is a nice book on this subject written by Best. Here is its full reference:

R. E. Best, "Phase-Locked Loops: Design, Simulation, and Applications", The McGraw-Hill Companies, New York, New York, 1999.

What is the maximum NRZ data rate? If it's small enough to be oversampled by a higher-frequency clock, there could be an alternative implementation: you can use a self-resettable NCO which is reset to a seed FTW at each NRZ posedge/negedge.

Hope this helps.

Vladislav

"Alex" <jovajsha@yahoo.com> wrote in message news:4Stwe.1813$R5.506@news.indigo.ie...
> Hi,
>
> I am trying to find some literature on how to design an all-digital PLL
> which would extract the clock from an NRZ signal. The main problem here is
> that a zero crossing is not present every bit. Also, does anyone know how
> to set the digital filter parameters (in practice) based on loop bandwidth,
> tracking band and parameters like that? Thanks.
>
> Alex

Article: 86512
I am using the Quatech SPP-100 PCMCIA parallel port emulator with a Dell M70. Everything looks great. When I program the PROM (XCF08S), it programs and verifies successfully. The FPGA does not get programmed correctly by the PROM on power-up, however.

Then I tried programming the FPGA, a Virtex-4 FX12, directly: same thing, it programs and verifies successfully but does not run. Then I tried programming the FPGA without verification, and it works fine. I tried the PROM without verification with no success. It seems the only thing that I can do is program the FPGA without verification.

By the way, the same program runs fine when programmed from a different machine with a dedicated parallel port. Anyone seen this before, and if so, any possible solutions?

Cheers,
irish

Article: 86513
Hi,

in my simulation I have a dual-port RAM (from Quartus II v5.0) which is initialized to all zeros with a hex file.

My problem: after some write operations into the RAM (32 write positions are provided by a FIFO module) I want to reset my design in my VHDL simulation (ModelSim) because the FIFO is empty. The present state of the design does not allow the write positions to be put back into my FIFO.

After the FIFO is empty I could reset my design within my testbench. But the contents of the RAM should also be reset, or rather re-initialized with zeros. I could step through the RAM and write zeros at each address, but that takes too long in simulation. The hex file seems to have a bearing on initialization only at the very beginning of the simulation.

Is there some possibility to initialize the RAM during simulation by means of the hex file?

Any suggestions are appreciated.

Rgds
André

Article: 86514
Hello,

I have a strange problem. I am trying to implement a decoder as a PLB peripheral. The problem appears when I try to synthesise the core: sometimes it synthesises properly, using up 34 block RAMs, but sometimes it doesn't. Can anybody give any reasonable explanation for this?

I am using Xilinx XPS for synthesising and downloading. I have IMP_NETLIST = TRUE in my .mpd file.

Joey

Article: 86515
Thanks Vladislav, I will browse the net for your suggestions.

BTW, the NRZ in this case can and will be oversampled 16 times.

Alex

"Vladislav Muravin" <muravinv@advantech.ca> wrote in message news:YYwwe.8234$mK5.579394@news20.bellglobal.com...
> Alex,
>
> I think there is a nice book on this subject written by Best. Here is its
> full reference:
>
> R. E. Best, "Phase-Locked Loops: Design, Simulation, and Applications",
> The McGraw-Hill Companies, New York, New York, 1999.
>
> What is the maximum NRZ data rate? If it's small enough to be oversampled
> by a higher-frequency clock, there could be an alternative implementation:
> you can use a self-resettable NCO which is reset to a seed FTW at each NRZ
> posedge/negedge.
>
> Hope this helps.
>
> Vladislav
>
> "Alex" <jovajsha@yahoo.com> wrote in message
> news:4Stwe.1813$R5.506@news.indigo.ie...
> > Hi,
> >
> > I am trying to find some literature on how to design an all-digital PLL
> > which would extract the clock from an NRZ signal. The main problem here
> > is that a zero crossing is not present every bit. Also, does anyone know
> > how to set the digital filter parameters (in practice) based on loop
> > bandwidth, tracking band and parameters like that? Thanks.
> >
> > Alex

Article: 86516
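Reading "FTW" as the NCO's frequency tuning word, one way the suggestion could play out with 16x oversampling is sketched below in plain C: a phase counter free-runs at the nominal bit rate and is re-seeded to mid-bit whenever a data edge is seen, so the decision point stays centred even though edges do not occur on every bit. The 16x ratio comes from the thread; everything else here is an assumption.

    #include <stdint.h>

    #define OSR 16                      /* oversampling ratio (from the thread) */

    typedef struct {
        int     phase;                  /* position within the bit, 0..OSR-1    */
        uint8_t last;                   /* previous raw sample                  */
    } cdr_t;

    /* Feed one raw sample; returns 1 and writes *bit when a bit is recovered. */
    int cdr_step(cdr_t *c, uint8_t sample, uint8_t *bit)
    {
        int got = 0;

        if (sample != c->last) {        /* data edge: re-seed the phase so the  */
            c->phase = OSR / 2;         /* decision lands mid-bit               */
        }
        c->last = sample;

        if (++c->phase >= OSR) {        /* nominal bit boundary reached         */
            c->phase = 0;
            *bit = sample;              /* take the decision here               */
            got = 1;
        }
        return got;
    }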
I am new to this group and new to FPGAs. I am working with an XCV2000E FPGA, and the tool I am using for synthesis is Xilinx ISE 6.2i. I am working with the logic module provided by ARM.

I need to interface a push-button switch provided on the board. But whenever I use the edge of this signal for triggering in the code (I am using Verilog), the Xilinx tool assumes it is a clock signal and ties it to the default clock pins provided in the FPGA. This pin is different from the push-button input. How can I reassign the input to the push-button switch? How can I disable the clock buffering in the tool?

One method I found is a change in the code: instead of directly clocking with the push button, I passed it through an AND gate. This removed the buffering of the push button. E.g. instead of

    always @(posedge pbut)

I used

    assign temp_clk = pbut & pbut_enable;
    always @(posedge temp_clk)

where pbut_enable is an external signal to which I assigned the LVTTL type (this, I hope, will keep pbut_enable at a "high" state when unconnected). Will this method work? Anyway, I want to know about disabling the clock buffering in the tool.

Hope you people will help me. Thank you.

Article: 86517
Vladislav,

Could you please explain to me what you meant by "reset to a seed FTW"?

Many thanks.

Alex

"Vladislav Muravin" <muravinv@advantech.ca> wrote in message news:YYwwe.8234$mK5.579394@news20.bellglobal.com...
> Alex,
>
> I think there is a nice book on this subject written by Best. Here is its
> full reference:
>
> R. E. Best, "Phase-Locked Loops: Design, Simulation, and Applications",
> The McGraw-Hill Companies, New York, New York, 1999.
>
> What is the maximum NRZ data rate? If it's small enough to be oversampled
> by a higher-frequency clock, there could be an alternative implementation:
> you can use a self-resettable NCO which is reset to a seed FTW at each NRZ
> posedge/negedge.
>
> Hope this helps.
>
> Vladislav
>
> "Alex" <jovajsha@yahoo.com> wrote in message
> news:4Stwe.1813$R5.506@news.indigo.ie...
> > Hi,
> >
> > I am trying to find some literature on how to design an all-digital PLL
> > which would extract the clock from an NRZ signal. The main problem here
> > is that a zero crossing is not present every bit. Also, does anyone know
> > how to set the digital filter parameters (in practice) based on loop
> > bandwidth, tracking band and parameters like that? Thanks.
> >
> > Alex

Article: 86518
I could run the 2.6 kernel without PCI support on the ML310 board, but I haven't written any document on that. Let me know in case you need help.

--
Parag Beeraka

Article: 86519
Joey wrote:
> Hello,
>
> I have a strange problem. I am trying to implement a decoder as a PLB
> peripheral. The problem appears when I try to synthesise the core:
> sometimes it synthesises properly, using up 34 block RAMs, but sometimes
> it doesn't. Can anybody give any reasonable explanation for this?

A little more info would be helpful... What happens if it doesn't synthesize "properly"? Any error messages?

cu,
Sean

Article: 86520
Kolja,

No one has proven that DPA can crack the decryption on the V2, V2 Pro, or V4... yet. And it isn't for lack of trying.

So, the challenge remains: can DPA crack 3DES or AES-256 as implemented in V2, V2 Pro, or V4?

Austin

Kolja Sulimma wrote:
> Piotr Wyderski wrote:
> > Hello,
> >
> > I would like to build a 1 Gbit/s data encryptor/decryptor using
> > an FPGA chip, but I have a big problem with an appropriate
> > chip. It should contain about 3000 LEs, 70 I/O pins and at least
> > 12 dual-port RAM blocks (I need two read ports per block) configurable
> > as 512x8 banks. Additionally, it should be Flash-based or SRAM-based
> > with an encrypted bitstream. And it must be cheap. Here are the
> > options I know of:
>
> Hmm. As I understand it, if the algorithm is known, a skilled attacker
> will find the key with differential power analysis within hours if he
> can take the board to the lab. So you do not gain much by an encrypted
> bitstream or even an OTP part. You only prevent amateurs from breaking
> your chip, but they can always pay an expert.
>
> You can of course invent your own algorithm and keep it secret, but I
> doubt that this will be any more secure.
>
> The only way to be safe is to make sure the key never ends up in a
> hostile lab, for example by carrying it around on a keychain.
>
> Kolja Sulimma

Article: 86521
Kryten,

I disagree: Virtex-4 FPGAs may be the only really secure means of developing a good crypto application.

What can be more regular than an ASIC? It is all there, every bit of information, to be sliced and diced. With an encrypted bitstream in an FPGA, there is an additional level of encryption that makes it even harder to crack. Not only is the design a secret, and its implementation, but the attacker does not even know where any wires of interest, or any bits of interest, are being kept.

Additionally, security-minded companies do not need to have a "secure foundry" build their chips: any foundry that makes the FPGA 'could' modify the masks, but that can be easily detected by comparing the FPGA with the original mask set (simple inspection).

Attacks such as DPA, flipping individual bits with laser/particle beams, etc. all fail due to the immense complexity of the problem. The FPGA with a billion transistors is not a stupid 'smart card' running a simple 8-bit uP program that can be easily spoofed.

I have challenged, and continue to challenge, anyone to crack the FPGA security (only by breaking it can we make it better). To that end, we still have some USB V2 Pro Logic Vault cards that are available for serious security hacking.

Austin

Kryten wrote:
> "David Brown" <david@westcontrol.removethisbit.com> wrote in message
> news:d9tjh9$lpo$1@news.netpower.no...
> > Kolja Sulimma wrote:
> > > Hmm. As I understand it, if the algorithm is known, a skilled attacker
> > > will find the key with differential power analysis within hours if he
> > > can take the board to the lab.
> >
> > I'm not an expert (I'm not even an amateur) in cryptography, but can you
> > not fool differential power analysis by adding extra logic to cause
> > unpredictable switching to swamp any useful power analysis?
>
> If it is random noise, then one could collect many samples of the same
> cryptographic operation and average them out. The noise averages toward
> zero, the signal increases with more samples. So it puts the cryptanalyst
> to more trouble, but the problem is not insurmountable.
>
> In DAC chips, I noticed that bits would switch currents into either the
> output circuit or a 'dumping' load. This is done so there is a constant
> current load on the power rail and thus less noise to affect other bits.
> Ideally a crypto device would do the same thing.
>
> However, I should point out that all current FPGA chips are unsuitable for
> cryptography where a skilled attacker can get hold of the device. They are
> very regular in structure, relatively easy to probe, and not built with
> anti-analysis features.
>
> Modern smart card chips are designed to make attack harder. Key bus and
> control lines might be buried so one has to etch as well as probe, data
> storage physically jumbled, and so on. They can still be analysed given
> enough time and money, but the aim is to make this more expensive than
> what the attacker can gain from the process.
>
> That said, the Cambridge company nCipher uses FPGA chips in its
> web-encryption products. These sit in the service providers' locations, so
> they are more secure than being out in the field.
>
> K.

Article: 86522
Marc,

Contact your FAE for information. The information packet on this subject states that the shift in Vt is reversible.

Austin

Marc Randolph wrote:
> Antti Lukats wrote:
> > [...]
> > AR21127 says that if the V4 is powered but not configured the DCM
> > performance will start to degrade (there are actual changes in the
> > silicon!), but this process is reversible, like self-healing, so if the
> > V4 is later powered and configured for the same amount of time then DCM
> > performance will slowly be restored to be inside the specification again.
>
> Howdy Antti (or Austin),
>
> Could you point me to where you saw something official from Xilinx
> (unfortunately a newsgroup posting isn't official) about the DCM
> recovering its max frequency specs after being configured or unpowered for
> a period of time? All I can find is a description of it degrading.
>
> BTW, the following answer record may be of interest to you and others:
> http://xilinx.com/xlnx/xil_ans_display.jsp?getPagePath=21435
>
> Thanks,
>
> Marc

Article: 86523
ALuPin@web.de wrote:
> After the FIFO is empty I could reset my design within my testbench.

Yes, just push more data in using the testbench.

> But the contents of the RAM should also be reset, or rather
> re-initialized with zeros.

That's not what will happen on the bench. A FIFO can only access data that has been pushed into it. You want 'U's elsewhere to make sure your head and tail counters are working OK.

-- Mike Treseler

Article: 86524
"Austin Lesea" <austin@xilinx.com> wrote in message news:d9uchh$ico1@cliff.xsj.xilinx.com... > With an encrypted bitstream in an FPGA, there is an additional level of > encryption that makes it even harder to crack. Not only is the design a > secret, and its implementation, but the attacker does not even know > where any wires of interest, or any bits of interest are being kept. FPGAs also give you yet another level of security here. For any given circuit functionality, and given sufficient timing/size margin, there are billions of different placement and routing combinations that will do the same thing. One could quite easily automate the process of creating a new, individual bitstream for each device to be shipped - particularly in Piotr's case, since he is talking of very small production runs. This technique doesn't help you against an attack on an individual device, of course. What it *does* mean is that cracking device X doesn't help you to crack device Y as well, because all the wires and logic elements are in different places. You'd have to start again from scratch. The avalanche effect in the bitstream encryption makes each (bitstream, device) combination into a completely unique problem. This would be reassuring to a customer wanting to buy a large number of these magical crypto chips. Cheers, -Ben-