Ed McGettigan wrote:
> Stephen Williams wrote:
>>> 8) What exactly does this mean?
>>> "The SystemACE driver is getting an error that the JTAG configurator
>>> was unable to read the configuration stream from the CF."
>>
>> These are the error, status and control register contents when the
>> Linux kernel discovers the error:
>> CONTROLREG=0x10a STATUSREG=0x19035e ERRORREG=0x2098
>>
>> The Linux kernel is 2.4.33-pre1 w/ the mvista SystemACE drivers
>> for Linux 2.4.
>>
>> The error messages from the kernel driver are:
>> CompactFlash write command failed
>> CompactFlash sector failed to ready
>> CompactFlash sector ID not found
>> JTAG controller couldn't read configuration from the CompactFlash
>
> I had a quick discussion with our Linux/UBoot expert and the SystemACE
> designer as well as reading your original case that you filed with our
> hotline and we believe that the SystemACE driver code that you are using
> is resetting the SystemACE and causing another configuration of the
> devices in the chain.

(Thank you for actually *understanding* my situation! It's a relief.)

> In the case of the single FPGA in the chain the SystemACE reconfiguration
> may complete and return control to the MPU port before another MPU access
> is made. While in the case of the two FPGAs in the chain the
> reconfiguration is still occurring when you attempt another MPU access.
> In any case you should not be reconfiguring the devices a second time
> (unless you really want to) and our Hotline had given you instructions
> on how to prevent the reconfiguration.

If you are referring to the FORCECFGMODE and CFGMODE bits, note in the
CONTROLREG dump that they are set according to the recommendations we
received: FORCECFGMODE=1, CFGMODE=0, CFGSTART=0.

I've put dumps of the CONTROLREG in several places in the driver and I
do not see that those bits are being changed.

I agree that it seems like the SystemACE thinks it needs to reload the
FPGAs. The question is *why* does it think that. So far as I can see, the
FORCECFGMODE should prevent that. I can't find anywhere in the Linux
driver that this bit is being wiggled, and a dump at the crash point
shows that it is still set up correctly.

> If you would like to confirm this theory you can put a scope probe on the
> CFG_TCK pin from SystemACE and you should see it actively toggle, stop for
> a period of time after the 1st configuration and then restart again at some
> point before stopping permanently.

I will try to perform this test.

>>> 9) It sounds like you filed a case with our hotline, what number were
>>> you assigned?
>>
>> Case # 628407
>> (The webcase person asked none of these questions.)

--
Steve Williams              "The woods are lovely, dark and deep.
steve at icarus.com          But I have promises to keep,
http://www.icarus.com        and lines to code before I sleep,
http://www.picturel.com      And lines to code before I sleep."

Article: 101326
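A quick aside on the register dump above: the check Stephen describes (confirming that FORCECFGMODE stays set while CFGMODE and CFGSTART stay clear) is easy to drop into a driver as a small helper. The sketch below is illustrative only - the CONTROLREG offset and bit positions are assumptions that should be verified against the Xilinx System ACE data sheet (DS080), and sysace_readreg() is a hypothetical stand-in for whatever MMIO accessor the driver actually provides.

    /* Sketch: verify that the System ACE CONTROLREG still has FORCECFGMODE=1,
     * CFGMODE=0, CFGSTART=0 at an arbitrary point in the driver.
     * Offset and bit positions are assumptions -- confirm against DS080. */
    #include <stdint.h>
    #include <stdio.h>

    #define SYSACE_CONTROLREG   0x18        /* assumed MPU offset of CONTROLREG */
    #define CTRL_FORCECFGMODE   (1u << 3)   /* assumed bit positions */
    #define CTRL_CFGMODE        (1u << 4)
    #define CTRL_CFGSTART       (1u << 5)

    /* Hypothetical accessor; in a real driver this would be an MMIO read. */
    extern uint32_t sysace_readreg(uint32_t offset);

    static void sysace_check_cfg_bits(void)
    {
        uint32_t ctrl = sysace_readreg(SYSACE_CONTROLREG);

        printf("CONTROLREG=0x%x FORCECFGMODE=%d CFGMODE=%d CFGSTART=%d\n",
               (unsigned)ctrl,
               !!(ctrl & CTRL_FORCECFGMODE),
               !!(ctrl & CTRL_CFGMODE),
               !!(ctrl & CTRL_CFGSTART));

        /* With the recommended settings we expect FORCECFGMODE set and the
         * other two clear; anything else means something toggled them. */
        if (!(ctrl & CTRL_FORCECFGMODE) || (ctrl & (CTRL_CFGMODE | CTRL_CFGSTART)))
            printf("warning: CONTROLREG no longer matches the recommended setup\n");
    }

With the dump quoted above (CONTROLREG=0x10a), these assumed bit positions would report FORCECFGMODE=1, CFGMODE=0, CFGSTART=0, i.e. consistent with what the post says.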
Hi,

Does anyone use the Opencores PCI bridge with Altera devices? My card
works fine with the Altera core, but I want to implement the free one ;)
I configured it as GUEST but nothing happened - I mean, the BIOS couldn't
find it. Maybe there's something wrong with the buffers on the top-level
schematic - it's my first project in Quartus. If anybody can help me,
I'll send my project by e-mail.

Thanks in advance,
Quazar

Article: 101327
"Stephen Williams" <spamtrap@icarus.com> schrieb im Newsbeitrag news:FPidnYNfX-iWE8_ZRVn-hA@giganews.com... > Antti wrote: >> the answer is almost always: Yes/No >> >> all the in-system-programming and JTAG stuff is not as much standard as >> it could be. >> >> there have been many attempts to develop vendor neutral or at least >> multi-vendor technologies but all attempts have failed so far. >> >> its seems that big boys have big issues playing it nicely in a (common) >> sandbox, so almost all vendors have some 'special' things making the >> 'generic' things not fully useable. >> >> xilinx has XSVF a binary version of SVF, some Xilinx parts can not be >> programmed with standard SVF, >> lattice has SVF-Plus >> altera has its own flavors of JAM/STAPL >> actel has its own flavors of STAPL >> >> if you think a SVF player is a SVF player is a SVF player, then no it >> isnt, >> same for JAM/STAPL > > So the issue is not whether one count send a properly formatted > SVF file stream through a generic player and get a PROM/FPGA to > become programmed, but that one can't easily get that SVF string > in the first place. > > Bear with me, I really don't know. I just have in front of me a > printout of "Serial Vector Format Specification" and some wishful > thinking. > Hi Steve, its only wishfull thinking, :( compare the different source code revisions of xilinx xsvfplayer and svf2xsvf and lattice svf2vme and ispVM and you will understand, as with actel/altera and jam-stapl its not much better. there are of course fpgas and cases where 'generic SVF' would be ok, but all fpga vendors have devices where pure generic svf+player would not work :( anttiArticle: 101328
tomstdenis,

http://www.xilinx.com/bvdocs/ipcenter/data_sheet/Helion_Standard_AES_AllianceCORE_data_sheet.pdf

shows some of the claimed clock rates for their AES encrypt/decrypt IP
core: 257 MHz (V2 Pro) to 252 MHz (V4). Throughput in b/s is ~2 to 3X the
clock rate (per this data sheet). Other cores run just shy of 200 MHz.

Other data from this same vendor claims up to 20 Gb/s throughput for
their 'fast' FPGA-based AES encryptors and decryptors.

At one time we made a 10 Gb/s decryptor to prove that distributing full
resolution theater real-time movies could be done with one FPGA in the
'projector.' This prevents piracy by decrypting the movie at the
projector itself (at no time is the full digital information available
for copying). This was back in the Virtex II days, so the 20 Gb/s claim
is perfectly reasonable for V4 today (IMO).

There are a number of other IP vendors with encryptors and decryptors for
our FPGAs:

http://xgoogle.xilinx.com/search?output=xml_no_dtd&ie=UTF-8&oe=UTF-8&client=iplocator&proxystylesheet=iplocator&site=IPLocator&filter=0&_ResultsView=Standard&num=25&q=aes&as_q=&getfields=*&newSearch=http://www.xilinx.com/xlnx/xebiz/search/ipsrch.jsp&formAction=http://www.xilinx.com/cgi-bin/search/iplocator.pl&IPCategory=&IPSubcategory=&sGlobalNavPick=&sSecondaryNavPick=&requiredfields=IPProducts&partialfields=
or
http://tinyurl.com/hajhj

Austin

tomstdenis@gmail.com wrote:
> Paul Rubin wrote:
>> tomstdenis@gmail.com writes:
>>> The typical AES core takes ~14 cycles to encrypt, but FPGAs normally
>>> run at a couple hundred MHz at most [usually topping out between 100
>>> and 200 MHz]. 200 MHz is 13 times less than 2.6 GHz, which is
>>> equivalent to 182 cycles at 2.6 GHz. This is less than the 256 cycles
>>> that the Opteron takes, but only marginally so.
>>
>> I'd think if you're going to use such an expensive and exotic approach
>> at all, you'd pipeline it to get one AES operation per cycle, maybe
>> even more than one if you're doing something like EAX mode, or CTR
>> mode on a large block in parallel.
>
> Even with pipelining you're still on a fairly limited bus. At best you
> top out at whatever the bus between the two actually is. Keep in mind
> this is an FPGA and not an ASIC. So chances are it won't clock that
> high anyway. My 200 MHz quote is just a really, really optimistic
> quote. From what I recall from my past job you'd be lucky to get
> something complicated like a PDU clocking higher than PCI freq
> [33 MHz]. So while you could get an AES core at ~100 MHz it would only
> be doing CTR mode at most.
>
> Block ciphers are not where this will shine. Especially when the other
> processor is an Opteron.
>
> The trick to making good use of something like an FPGA isn't serial
> speed. Even if you designed a custom RISC ALU on the FPGA it'd clock
> probably around 50 MHz. Even with the best ISA you can craft for it the
> Opteron could EMULATE the thing faster than you could run it. Where the
> FPGA will shine is for tasks with a LOT of parallel computation. Think
> like 16 FPU pipelines or a single-cycle GF(2) multiplier, etc.
>
> Other tasks where this would shine would be custom DSP filters, e.g.
> offloading MPEG work. A FIR or IIR filter of significant delay [e.g.
> accuracy] could be constructed in a pipeline to get 1 sample/cycle at
> decent clock rates.
>
> Tom

Article: 101329
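For readers wondering where the "2 to 3X the clock rate" and 20 Gb/s figures above come from, the arithmetic is simply block size times clock rate divided by cycles per block. A small sketch with illustrative cycle counts (these are not figures from the Helion data sheet):

    /* Back-of-envelope AES throughput: 128-bit blocks, N cycles per block.
     * Cycle counts below are illustrative, not taken from any vendor core. */
    #include <stdio.h>

    static double aes_throughput_mbps(double clock_mhz, double cycles_per_block)
    {
        return 128.0 * clock_mhz / cycles_per_block;   /* Mbit/s */
    }

    int main(void)
    {
        /* Iterative core, roughly one round per clock (~11 cycles/block) */
        printf("iterative : %.0f Mbit/s\n", aes_throughput_mbps(250.0, 11.0));

        /* Small area-optimised core, ~50 cycles/block: throughput ends up
         * roughly 2-3x the clock rate in bit/s, as the data sheet quotes. */
        printf("compact   : %.0f Mbit/s\n", aes_throughput_mbps(250.0, 50.0));

        /* Fully unrolled/pipelined core, one 128-bit block per clock */
        printf("pipelined : %.1f Gbit/s\n",
               aes_throughput_mbps(250.0, 1.0) / 1000.0);
        return 0;
    }

At 250 MHz the three cases work out to roughly 2.9 Gb/s, 640 Mb/s and 32 Gb/s, which is why a pipelined core comfortably clears the 10-20 Gb/s claims above.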
Stephen Williams wrote:
> Antti wrote:
>> the answer is almost always: Yes/No
>>
>> all the in-system-programming and JTAG stuff is not as much standard as
>> it could be.
>>
>> there have been many attempts to develop vendor neutral or at least
>> multi-vendor technologies but all attempts have failed so far.
>>
>> its seems that big boys have big issues playing it nicely in a (common)
>> sandbox, so almost all vendors have some 'special' things making the
>> 'generic' things not fully useable.
>>
>> xilinx has XSVF a binary version of SVF, some Xilinx parts can not be
>> programmed with standard SVF,
>> lattice has SVF-Plus
>> altera has its own flavors of JAM/STAPL
>> actel has its own flavors of STAPL
>>
>> if you think a SVF player is a SVF player is a SVF player, then no it
>> isnt,
>> same for JAM/STAPL
>
> So the issue is not whether one could send a properly formatted
> SVF file stream through a generic player and get a PROM/FPGA to
> become programmed, but that one can't easily get that SVF string
> in the first place.

Hmm.. I read what Antti said as more "Not all SVF strings are created
equal.."

> Bear with me, I really don't know. I just have in front of me a
> printout of "Serial Vector Format Specification" and some wishful
> thinking.

Also, if the vendor uses non-binary SVF themselves, then you can expect
valid files, but if they export SVF as some tick-box option, then expect
lower yields...

-jg

Article: 101330
Hi,

I am a first-time user of Xilinx ChipScope. The waveform file (*.vcd)
exported from ChipScope is on a lab computer, and I want to review the
same waveforms on the computer in my office.

Is there a good way to do it?

Thank you.

Weng

Article: 101331
Rickman -

I have no bones to pick with you and I should know better than to dive
into this pissing match between you and Austin, but from your posts on
this group you come off to me as a perpetually exasperated and pissed-off
person who is probably not a barrel of laughs to be around. Case in point
is all of your ranting posts about those darn S3 pullups. I don't have a
problem with you being a little pissy now and then, but I do have a
problem with you taking shots at anyone else's personality. Those in
glass houses, bro'. By the way, I disagree with your assessment of
Austin's on-line personality. He gets passionate, but unlike you is
almost always courteous (except with Leventis from Altera).

Rob

"rickman" <spamgoeshere4@yahoo.com> wrote in message
news:1145964671.096136.65230@t31g2000cwb.googlegroups.com...
> As long as you brought up your personality, I'll just say that I
> remember reading about someone who reminds me of you, Napoleon. He
> thought a lot of himself too.
>
> Austin Lesea wrote:
>> Rick,
>>
>> Well, I hope you keep up the comments.
>>
>> I am seriously trying to improve the documentation process.
>>
>> If we explain it right the first time, we get less confusion, and we
>> get to market faster.
>>
>> Really very simple, and very self serving: the better the docs, the
>> less time wasted, and the faster our customers either gain success, or
>> fail. The faster money changes hands. The faster we succeed (or fail
>> to succeed).
>>
>> It is so simple, yet so many (most) companies get it wrong.
>>
>> Just make it simple to succeed, and don't get in the way, or make
>> things any tougher than they already are.
>>
>> As for my personality, my wife thinks I am completely impossible ('how
>> can anyone work for you?'). My staff thinks I am the best boss they
>> ever had ('how can your boss deal with you?'). My supervisors love
>> that I always come to them with a solution to the problem ('how do you
>> keep your people so happy?'). My (totally grown) kids claim I am
>> totally mad, and can not be trusted even for a moment ('that's not
>> fair!'). My grandchildren sense I can be completely trusted to see to
>> their well being and happiness (which I can be).
>>
>> So, I have a personality: guilty as charged. Member of the human race.
>>
>> Austin

Article: 101332
tomstdenis@gmail.com writes:
> E.g. synthesize a MIPS core in the FPGA and map the DDR controller
> on to it.

Just one? Why not a couple dozen small purpose-designed RISC cores,
running in parallel?

Article: 101333
c d saunter wrote:
> tomstdenis@gmail.com wrote:
> : Jan Panteltje wrote:
> : > http://www.dailytech.com/article.aspx?newsid=1920
>
> <snip>
>
> : 8x200Mhz only provides 400MB/sec traffic to the CPU so really this is
> : useful for tasks which either totally reside on the FPGA side of the
> : board or have really high latency (e.g. PK work).
>
> Sitting on the HT bus like that offers residence about as close as you
> can get to a mainstream CPU. Given the new HT3 stuff - faster and links
> possible over 1 meter - i.e. directly joining blades - I really like
> this approach. Especially given the memory architecture that goes along
> with HT/Opterons. It's bringing mainstream CPUs and FPGAs back into the
> point to point multiple interconnect world of the TigerSHARCs and the
> old TI C40s.
>
> It feels a bit like a resurgence of the old British Transputer except
> with gate arrays mixing with CPUs on an equal footing in terms of
> connectivity.
>
> cds

Um, yes, it does look familiar doesn't it. If you go to the origins of HT
when it was called something else at AlphaWorks IIRC, the key people had
originally come from Inmos and had worked on the PLLs for the Transputer,
and maybe those links too. The fellow is now a Fellow at AMD after they
bought them out. In a previous life the same people were at Meiko and did
their own routers, used to stitch up T800s and later several other CPUs,
ultimately leading to the Alpha platform after Meiko went belly up.

When I first heard Xilinx was taking an HT license, which seems a long
time ago now, I wondered when this would happen. When I first saw the
early marketing for the Hammer with 1, 2, 3, 4 of these HT links and the
memory channel too, I could only say out loud: looks & smells like a
Transputer to me with 20 yrs of development. But it isn't really - it
doesn't have the process scheduler or any real support for programming
concurrently per occam, just links.

But when I see the product today with a huge price premium on the number
of HT links, I am disappointed. One Opteron with 1 link is cheap enough;
add more links and the cost goes way up, as it looks more and more like a
server platform. The number of links on the Transputer was always an
issue back then - 4 is a minimum. The socket module though looks a bit
like an SFF TRAM module, but the multi-socket Opteron boards are not
really TRAM carriers that can be populated with general-purpose computing
modules on a grid. Perhaps that will come back again, but probably with
more modest links.

I have been suggesting a Transputer resurgence for some time, by building
an FPGA Processor Element hooked up with a specialized MMU that shares
the available memory bandwidth of RLDRAM amongst many PEs, using
latency-hiding multithreading to make the PEs not appear to have any
memory wall. By distributing n PE+MMU nodes into the fabric, one can then
add algorithm-specific extensions or coprocessors to each and copy the
node in systolic fashion over the array. Each PE uses only 1 BRAM, so
quite a few PEs would fit. The Transputer is really now defined by all
the good stuff that goes into the MMU rather than the PEs. There is a
paper on it at wotug.org for anyone interested.

When you build algorithms in FPGA around arrays of customizable PEs, I
think some of the reasons for having an Opteron in the system may become
moot: put the CPUs into the FPGA, as many copies as you can get, since
all the real bandwidth is in/out of all the BlockRAMs, not the more
limited I/O pins. I will have to look more into HT3 though.

John Jakson
transputer guy

Article: 101334
"JJ" <johnjakson@gmail.com> writes: > I have been suggesting a Transputer resurgence for some time by I always thought it would be neat to design a CPU cell in a QFP fpga, such that all the pins on each side were designed to interface to an adjacent cell - making the PCB routing trivial. The cells along the boundary would be programmed to use the free edges to talk to external peripherals. I suppose with a BGA you could use the outer rows to talk to adjacent cells, and the inner rows to interface to a RAM chip on the other side of the board.Article: 101335
DJ Delorie wrote:
> "JJ" <johnjakson@gmail.com> writes:
>> I have been suggesting a Transputer resurgence for some time by
>
> I always thought it would be neat to design a CPU cell in a QFP fpga,
> such that all the pins on each side were designed to interface to an
> adjacent cell - making the PCB routing trivial. The cells along the
> boundary would be programmed to use the free edges to talk to external
> peripherals.

Given the FPGA resources needed for 1 PE (1 BRAM and about 500 LUTs/FFs),
and then putting around 10 of them with a shared MMU (which requires
unknown resources at this time), one might get a combined resource figure
that is still insignificant compared to the size of the largest FPGAs
that would likely be placed in these 940 sockets. Each MMU uses more
resources than a few PEs, but would also chew up a good portion of the
I/O pins, say 120 or so for one RLDRAM interface and more for external
links.

It becomes obvious one is really I/O limited or content limited, so an
array of much smaller FPGAs makes more sense on a TRAM-carrier type
board. Then every FPGA might get 4 MMU memory systems, giving effectively
40 or so PEs running at 300 MHz, or 100 MIPS each.

The total of 40 x 100 MIPS still doesn't look so good compared to 1
Opteron, but the system is very different. You end up with 160 or so
threads, since each PE is a 4-way MTA; you have to keep every thread busy
and that requires occam- or HDL-like parallel programming, versus
possibly only 1 thread on an Opteron. The big payback is that all these
threads get to see almost no memory wall, with full random access over
their local memory banks, some additional latency for nearby MMUs, and
more for off-FPGA nodes.

You either have a thread wall or you have a memory wall. The thread wall
is not really a problem for occam, CSP, Transputer and parallel people,
but it is a huge barrier to most Opteron customers. The memory wall
though is a real problem, requiring possibly 1000-clock-cycle memory
accesses for everything that misses the cache system, and caches can
never be big enough for the sorts of datasets some have in mind, nor can
the TLBs have enough associativity. I believe these memory walls are most
likely halving the typical throughput of sequential CPUs for even a
modest miss rate.

That's why I am prone to suggesting getting rid of the Opteron and
putting the CPUs right inside the algorithm, with local coprocessors per
PE or better still per MMU. One such coprocessor could be an FP unit,
which uses the same reasoning as the MMU: if an FP unit can deliver 1
flop per clock shared over 40 threads, each thread gets FP slices with
very little latency, on the order of a load or store op.

> I suppose with a BGA you could use the outer rows to talk to adjacent
> cells, and the inner rows to interface to a RAM chip on the other side
> of the board.

I haven't really worried too much about packaging, BGA v edge connected;
I suspect that the medium-size parts are big enough to hold enough PEs
and use up the I/O for RLDRAM and some for HT-like links. I would
probably put each FPGA & related RLDRAM on its own module, so it would
look a little like these DRC modules, or really a modern SFF TRAM. That
separates the module design from the motherboard design, and then you can
get some volume on these modules. Don't even ask why I wouldn't use
regular SDRAM: about 20x less random throughput, which would effectively
limit me to only 1-2 or so PEs per MMU, and that would leave the FPGA
almost empty.

John Jakson
transputer guy

Article: 101336
For any processor with no substantial caches, one might assume every 5th
opcode is a load or store; for a nice register-heavy design, maybe every
10th opcode. For a classic SDRAM interface the performance will be very
poor. The usual thing to do is to gang up lots of very expensive BRAMs
into I and D caches, which gives up a lot of the parallel bandwidth they
each have when used separately. Even then, each core now uses lots of
BRAM, some CPU logic and an SDRAM controller, and a good chunk of the I/O
is gone. That sort of system can be replicated maybe 4 times depending on
I/O count, and none of these has any performance to write home about. But
one could put additional algorithmic content next to each node. Memory
limits, and hence I/O pads, are the crux of the problem.

My Transputer design uses 1 BRAM/PE, hence on paper maybe 554 PEs might
fit in the biggest FPGA, but that doesn't work. The LUT/BRAM usage takes
it down to half that, and then assume the MMUs consume the rest of the
fabric in a regular tiling. Still, the memory traffic of 250-odd PEs
can't be funneled through maybe only 4 memory interfaces, even RLDRAM, so
the PE count either has to come way down and/or more of the BRAMs have to
be used as local caches, which gives up a lot of their bandwidth again.

One way around the I/O limit I have been thinking of is to bring the
RLDRAM inside the FPGA. Since we can't do that, instead replicate the
RLDRAM logical architecture of n concurrent slower banks using up all the
remaining BRAM, aggregating them into a cache that can be shared between
multiple PEs at the L1 level. Only when those miss does the L2 RLDRAM
come into play, so trading down PEs for BRAM caches allows more
Transputer nodes to share the few RLDRAM interfaces:

  ((n*PE + MMU + BRAM cache)*k + MMU + RLDRAM interface) * 4 or so.

Q: I am curious how many separate memory channels people have actually
put onto the largest FPGAs. I suspect on the high end, for independent
RLDRAM controllers, it is around 4, due to the specialized use of the
clock resources needed to make the DDR interfaces work. I also wonder if
those serial-interface DRAMs have come out yet that would allow many more
memory channels per FPGA.

John Jakson
transputer guy

Article: 101337
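To put rough numbers on the funnelling problem described above (every 5th opcode a memory access, a handful of external channels), the budget works out as below. The per-channel request rate and channel count are assumptions chosen for illustration, not measured RLDRAM figures.

    /* Rough budget: how many 300 MHz PEs can a few RLDRAM channels feed if
     * one opcode in five is a load/store?  All figures are illustrative. */
    #include <stdio.h>

    int main(void)
    {
        const double pe_clock_hz      = 300e6;     /* per-PE issue rate               */
        const double mem_op_fraction  = 1.0 / 5.0; /* every 5th opcode touches memory */
        const int    rldram_channels  = 4;         /* assumed per-FPGA channel count  */
        const double rldram_req_per_s = 300e6;     /* assumed random accesses/s/chan  */

        double per_pe_req  = pe_clock_hz * mem_op_fraction;      /* 60 M req/s  */
        double total_req   = rldram_channels * rldram_req_per_s; /* 1.2 G req/s */
        double sustainable = total_req / per_pe_req;             /* ~20 PEs     */

        printf("requests per PE      : %.0f M/s\n", per_pe_req / 1e6);
        printf("channel capacity     : %.1f G/s\n", total_req / 1e9);
        printf("PEs fully sustainable: %.0f\n", sustainable);
        /* Far fewer than the ~250 PEs that fit by LUT/BRAM count alone, which
         * is why the post trades PEs for BRAM cache in front of the RLDRAM. */
        return 0;
    }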
c d saunter wrote:
> JJ (johnjakson@gmail.com) wrote:
> : In comp.arch (and others) there is a thread on this Opteron Virtex4
> : coprocessor that sits in socket 940.
>
> : http://www.dailytech.com/article.aspx?newsid=1920
> : http://www.theregister.co.uk/2006/04/21/drc_fpga_module/?www.dailytech.com
> : http://www.drccomputer.com/pages/products.html
>
> : I wonder what others think of this, at $4500 its way to steep for most
> : individual buyers who might happen to have a dual socket Opteron board
> : (I don't), but I wonder if companies like Digilent, Enterpoint and
> : others might see any opportunity to build a much lower cost edu version
> : that is more in line with the cost of an Opteron cpu chip say <$1k and
> : based on best Spartan3 or Virtex2,4 that can still use Webpack.
>
> John,
> I expect the $4500 reflects development costs and what the market will
> bear more than anything else - after all the only thing I've seen near
> it was the old Pilchard FPGA on a DIMM research project.

Of course. I recall Xilinx VC funded that or another FPGA-DIMM company;
not quite the right time.

> : I also wonder how much faster exactly the HT link is over any of the
> : PCI interfaces.
>
> HT can be implemented much faster than parallel PCI but perhaps more
> importantly when used with an Opteron is that it's much more tightly
> coupled to the CPU. Looking at the upcoming HT3 I feel the best is yet
> to come.

Much lower latency, I hope.

> One of the posibilities that interest me is a V4 module sitting in a 940
> socket with some MGTs wired up (space is tight - I realise that!) - if
> you happen to be dealing in high speed data aquisition and processing on
> the bit/word and (large) frame level then a tightly coupled
> FPGA/commodity CPU system is really quite exciting.

Takes one back to when you could interface 68000s or whatever to your
custom logic or even wire-wrap your own mobo; it's been a long time since
one could "touch" a processor. What's your acquisition area?

> cds
>
> : John Jakson
> : transputer guy

Article: 101338
Weng Tianxiang wrote:
> Hi,
> I am a first-time user of Xilinx ChipScope. The waveform file (*.vcd)
> exported from ChipScope is on a lab computer, and I want to review the
> same waveforms on the computer in my office.
>
> Is there a good way to do it?
>
> Thank you.
>
> Weng

Google is your friend. Try a search for "vcd wave viewer".

Article: 101339
RobJ wrote:
> Rickman -
>
> I have no bones to pick with you and I should know better than to dive
> into this pissing match between you and Austin, but from your posts on
> this group you come off to me as a perpetually exasperated and
> pissed-off person who is probably not a barrel of laughs to be around.
> Case in point is all of your ranting posts about those darn S3 pullups.
> I don't have a problem with you being a little pissy now and then, but
> I do have a problem with you taking shots at anyone else's personality.
> Those in glass houses, bro'. By the way, I disagree with your
> assessment of Austin's on-line personality. He gets passionate, but
> unlike you is almost always courteous (except with Leventis from
> Altera).
>
> Rob

I was actually pleasantly impressed with how civil both parties have been
in this discussion. It's tough being chewed at by anal design reviewers
nitpicking every aspect of your design, which then requires an official
point-by-point follow-up afterward to maintain ISO9001 compliance.
Yeesh!

I still haven't seen definitive answers for rickman's specific issue,
which should be answerable from the data sheet or supporting literature.
It's a difficult situation for all parties involved. I appreciate the
restraint shown by all.

Article: 101340
"Jim Granville" <no.spam@designtools.co.nz> schrieb im Newsbeitrag news:4452a5a5@clear.net.nz... > Stephen Williams wrote: > >> Antti wrote: >> >>>the answer is almost always: Yes/No >>> >>>all the in-system-programming and JTAG stuff is not as much standard as >>>it could be. >>> >>>there have been many attempts to develop vendor neutral or at least >>>multi-vendor technologies but all attempts have failed so far. >>> >>>its seems that big boys have big issues playing it nicely in a (common) >>>sandbox, so almost all vendors have some 'special' things making the >>>'generic' things not fully useable. >>> >>>xilinx has XSVF a binary version of SVF, some Xilinx parts can not be >>>programmed with standard SVF, >>>lattice has SVF-Plus >>>altera has its own flavors of JAM/STAPL >>>actel has its own flavors of STAPL >>> >>>if you think a SVF player is a SVF player is a SVF player, then no it >>>isnt, >>>same for JAM/STAPL >> >> >> So the issue is not whether one count send a properly formatted >> SVF file stream through a generic player and get a PROM/FPGA to >> become programmed, but that one can't easily get that SVF string >> in the first place. > > Hmm.. I read what Antti said as more > "Not all SVF strings are created equal.." > > >> Bear with me, I really don't know. I just have in front of me a >> printout of "Serial Vector Format Specification" and some wishful >> thinking. > > Also, if the vendor uses non-binary SVF themselves, then you can expect > valid files, but if they export SVF as some tick-box option, then > expect lower yields... > > -jg > nops - even vendor generated ASCII SVF/STAPL are not compatible :( AnttiArticle: 101341
On 28 Apr 2006 12:39:05 -0700, fpga_toys@yahoo.com wrote:
>
> Eric Smith wrote:
>>
>> OK, but that still doesn't explain what "laws" were broken.
>
> .... come on now ... splitting semantic hairs are we? Shannon's work is
> described by some as a law, even though most of us understand it's
> really a theory that has never been proven mathematically. Ditto with
> "Amdahl's Law" and a host of similar predictions. Common knowledge
> proofs based on state of the art and general acceptance are generally
> considered informally as a law ... with Amdahl's speculations being
> just one of many widely held beliefs that function as folk "laws".
>
>> "Common knowledge" is much different than "laws".
>
> Hardly, except maybe for some precise mathematical niches.
>
> When I was taking engineering classes in the early 1970's one rather
> lengthy lecture was on modems, along with a detailed "proof" of why
> modems would not get faster than 600 baud based on the Nyquist Sampling
> Theorem, Shannon's work, and a few others. The rather lengthy
> presentation described splitting a 3 kHz bandwidth in half for full
> duplex, choosing carrier frequencies that were not harmonics, and the
> minimum number of carrier cycles necessary to decode a symbol.
>
> In retrospect, the proof contained some assumptions that have since
> been invalidated, mostly because of improvements in the phone line
> noise factors. With lower noise (crosstalk) came the ability to design
> around a higher bandwidth (Shannon's theory).

I thought we had been discussing Shannon's Channel Capacity *Theorem*.
But I see you must be referring to something else. What exactly?

Allan

Article: 101342
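For reference, the bound under discussion is Shannon's channel capacity, C = B * log2(1 + S/N). The worked numbers below (illustrative SNR values for a nominal 3 kHz voice channel) show why a quieter line, rather than any change to the theorem, is what moved modems far past those early predictions:

    /* Shannon capacity C = B * log2(1 + S/N) for a nominal 3 kHz voice
     * channel, at a few illustrative signal-to-noise ratios. */
    #include <math.h>
    #include <stdio.h>

    static double shannon_capacity_bps(double bandwidth_hz, double snr_db)
    {
        double snr_linear = pow(10.0, snr_db / 10.0);
        return bandwidth_hz * log2(1.0 + snr_linear);
    }

    int main(void)
    {
        const double bw = 3000.0;   /* ~3 kHz voice bandwidth */
        const double snrs_db[] = { 10.0, 20.0, 30.0, 40.0 };

        for (unsigned i = 0; i < sizeof snrs_db / sizeof snrs_db[0]; i++)
            printf("SNR %2.0f dB -> C = %5.1f kbit/s\n",
                   snrs_db[i], shannon_capacity_bps(bw, snrs_db[i]) / 1000.0);
        /* ~30 dB gives ~30 kbit/s, roughly where V.34 analogue modems ended
         * up; the old 600-baud 'proof' assumed a far noisier channel, not a
         * different law. */
        return 0;
    }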
On Fri, 28 Apr 2006 15:55:32 -0700, Austin Lesea <austin@xilinx.com> wrote:
> tomstdenis,
>
> http://www.xilinx.com/bvdocs/ipcenter/data_sheet/Helion_Standard_AES_AllianceCORE_data_sheet.pdf
>
> Shows some of the claimed clock rates for their AES encrypt/decrypt IP
> core. 257 MHz (V2 Pro) to 252 MHz (V4). Throughput in b/s is ~ 2 to 3
> X the clock rate (per this datasheet). Other cores run just shy of 200 MHz.
>
> Other data from this same vendor makes claims of up to 20 Gbs for
> throughput of their 'fast' FPGA based AES encryptors and decryptors.
>
> At one time we made a 10 Gbs decryptor to prove that distributing full
> resolution theater real time movies could be done with one FPGA in the
> 'projector.' This prevents piracy by decrypting the movie at the
> projector itself (at no time is the full digital information available
> for copying).
>
> This was back in the Virtex II days, so the 20 Gbs claim is perfectly
> reasonable for V4 today (IMO).

20Gb/s was perfectly reasonable for V2P a few years ago. I can't tell you
how I know that :)

Allan

> There are a number of other IP vendors with encryptors and decryptors
> for our FPGAs.
>
> http://xgoogle.xilinx.com/search?output=xml_no_dtd&ie=UTF-8&oe=UTF-8&client=iplocator&proxystylesheet=iplocator&site=IPLocator&filter=0&_ResultsView=Standard&num=25&q=aes&as_q=&getfields=*&newSearch=http://www.xilinx.com/xlnx/xebiz/search/ipsrch.jsp&formAction=http://www.xilinx.com/cgi-bin/search/iplocator.pl&IPCategory=&IPSubcategory=&sGlobalNavPick=&sSecondaryNavPick=&requiredfields=IPProducts&partialfields=
> or
> http://tinyurl.com/hajhj
>
> Austin
>
> tomstdenis@gmail.com wrote:
>> Paul Rubin wrote:
>>> tomstdenis@gmail.com writes:
>>>> The typical AES core takes ~14 cycles to encrypt but in FPGAs normally
>>>> run at most at a couple hundred MHz at most [usually topping out
>>>> between 100 and 200Mhz at most]. 200Mhz is 13 times less than 2.6Ghz
>>>> which is equivalent to 182 cycles at 2.6Ghz. This is less than the 256
>>>> cycles that the Opteron takes but only marginally so.
>>>
>>> I'd think if you're going to use such an expensive and exotic approach
>>> at all, you'd pipeline it to get one AES operation per cycle, maybe
>>> even more than one if you're doing something like EAX mode, or CTR
>>> mode on a large block in parallel.
>>
>> Even with pipelining you're still on a fairly limited bus. At best you
>> top out at whatever the bus between the two actually is. Keep in mind
>> this is an FPGA and not ASIC. So chances are it won't clock that high
>> anyways. My 200Mhz quote is just a really really optimistic quote.
>> From what I recall from my past job you'd be lucky to get something
>> complicated like a PDU clocking higher than PCI freq [33Mhz]. So while
>> you could get an AES core ~100Mhz it would only be doing CTR mode at
>> most.
>>
>> Block ciphers are not where this will shine. Specially when the other
>> processor is an Opteron.
>>
>> The trick to making good use of something like an FPGA isn't serial
>> speed. Even if you designed a custom RISC ALU on the FPGA it'd clock
>> probably around 50Mhz. Even with the best ISA you can craft for it the
>> Opteron could EMULATE the thing faster than you could run it. Where
>> the FPGA will shine is for tasks with a LOT of parallel computation.
>> Think like 16 FPU pipelines or a single cycle GF(2) multiplier, etc,
>> etc, etc.
>>
>> Other tasks where this would shine would be custom DSP filters, e.g.
>> offload MPEG work. A FIR or IIR filter of significant delay [e.g.
>> accuracy] could be constructed in a pipeline to get 1 sample/cycle at
>> decent clock rates.
>>
>> Tom

Article: 101343
Try GTKWave, Dinotrace, VCD Viewer, or vcd2wlf if you have access to
ModelSim:

http://www.geocities.com/SiliconValley/Campus/3216/GTKWave/gtkwave-win32.html
http://www.iss-us.com/wavevcd/index.htm
http://www.veripool.com/

Hans
www.ht-lab.com

"Weng Tianxiang" <wtxwtx@gmail.com> wrote in message
news:1146268229.884617.298710@e56g2000cwe.googlegroups.com...
> Hi,
> I am a first-time user of Xilinx ChipScope. The waveform file (*.vcd)
> exported from ChipScope is on a lab computer, and I want to review the
> same waveforms on the computer in my office.
>
> Is there a good way to do it?
>
> Thank you.
>
> Weng

Article: 101344
Hi all,

Is the Xilinx site working OK? For me, almost all the links are not
working!!

Thanks in advance for a quick reply

Article: 101345
GaLaKtIkUs™ wrote:
> Hi all, Is the Xilinx's site working ok? for me almost all the links
> are not working!!
>
> Thanks in advance for quick reply

I just tried one link at random, and had a problem:

< http://www.xilinx.com/xlnx/xebiz/designResources/ip_product_details.jsp?key=HW-S3PCIE-DK >

Leon

Article: 101346
Hi Francesco,

Thanks for your effort. Take a look at the following article for
accessing the JTAG port in a user design:

http://www.xilinx.com/publications/xcellonline/xcell_53/xc_jtag53.htm

Best regards,
Alper YILDIRIM

> Does anybody know how to read/write the BRAM using the JTAG?
> I need this to design the debugger....

Article: 101347
I can access the marketing side of things, but the documentation pages
return:

"An HTTP error occurred while getting:
http://www.xilinx.com/xlnx/xweb/xil_publications_index.jsp?category=User+Guides
Details: "connect timed out"."

Even when it's up, Xilinx's site is slow and I encounter dead links
frequently.

GaLaKtIkUs™ wrote:
> Hi all, Is the Xilinx's site working ok? for me almost all the links
> are not working!!
>
> Thanks in advance for quick reply

Article: 101348
Stephen Craven wrote:
> I can access the marketing side of things, but the documentation pages
> return:
>
> "An HTTP error occurred while getting:
> http://www.xilinx.com/xlnx/xweb/xil_publications_index.jsp?category=User+Guides
> Details: "connect timed out"."
>
> Even when it's up, Xilinx's site is slow and I encounter dead links
> frequently.
>
> GaLaKtIkUs™ wrote:
>> Hi all, Is the Xilinx's site working ok? for me almost all the links
>> are not working!!
>>
>> Thanks in advance for quick reply

Ditto for the support site: support.xilinx.com (the main page shows up
but an attempt to log in times out).

Article: 101349
We had a massive network problem inside Xilinx in San Jose, but that was resolved on Friday. Peter Alfke