Well, Google can find "ISA bus specifications", although I forget which org hosts the top hit. You could look at using PC104+ and stick with the small-form-factor PCI standard, which is much closer to home with newer parts and much faster; then just search Google Groups for PCI FPGA stories.

johnjakson at usa dot com
Article: 86401
This kind of behaviour is probably determined by the chipset, but is quite understandable. You have to think about it from the point of view of the entire system. A PCI 32/33 bus is running MUCH slower than the PC is running its SDRAM bus; the PCI is 132MB/s, whereas the CPU is probably accessing the SDRAM at 200+MHz at 64 or 128 bits/clock (so in the range of 3.2+GB/s).

One of the "weaknesses" of the PCI protocol is that the desired length of an access is not signalled at the beginning of the burst. During a burst, the target does not know if the burst is going to be 2 DWORDs long, or 128 DWORDs, or 1024 DWORDs. If the entire system were running at 132MB/s, the PC could "slave" the SDRAM to the PCI, continuing to fetch words from the SDRAM until the PCI master says "that's enough". That way, you could get uninterrupted PCI read bursts up to the length determined by the master (unless the target needs to stop, say for a page break, or the arbiter interrupts with a higher-priority access). However, in a system like the PC, where the SDRAM is running so much faster, this is extremely inefficient; the internal bus would be idle (wasted) 90%+ of the time during PCI transfers. No PC design would do this...

So instead, the PCI target in the PC chipset picks an "arbitrary" length for a read access when it arrives. The length chosen is determined by a bunch of things: if the choice is too long (say 128 DWORDs), then there is a good chance that a lot of SDRAM bandwidth will be wasted; if the master is only requesting 2 DWORDs, then the other 126 must be thrown away. The length chosen should also be "big enough" that the SDRAM isn't used inefficiently; SDRAMs are very inefficient at single-word accesses, since it takes a lot of clock cycles to open the page and access the column, but once that is done, you can burst data very quickly. The chipset also needs a FIFO between the SDRAM and the PCI target; gates are relatively cheap, but still, you don't want to go overboard.

So, this chip manufacturer chose 8 DWORDs; that's probably a pretty typical choice for this application. It's odd that the chip chooses to do a disconnect without data transfer, rather than a disconnect with the 8th word of data, since this wastes an extra cycle on the PCI bus, but as another poster pointed out, efficiency on the PCI bus is not at the top of the list of things that are important in the design of a PC.

There is probably nothing you can do about this... You may be able to change it by changing something deep in the PC BIOS; it is possible that the FIFO is larger but the BIOS implementer chose to use only 8 DWORDs. But without knowing the chipset registers in detail (something which is probably not available to the public, and even if it is, something that is probably unwise to change), you won't be able to tell. A different chipset could behave differently, but I wouldn't expect any to be significantly better; a different chipset might do a disconnect on the 8th word (rather than after), or might do 16 DWORDs, or maybe even 32 (although I doubt it), but I would be extremely surprised if any chipset did more than that, due to the limitations described above...

Avrum

"uxello" <uxello@free.fr> wrote in message news:42c01f5e$0$31775$636a15ce@news.free.fr...
> Thank you Ron for your reply:
>
> > Writes are always much faster in PCI since most bridges provide FIFOs and
> > allow posted writes.
>
> Probably, but SDRAM can accept bursts larger than 8 words.
> I do understand the pretty long latency (around 12 clocks)
> due to the host bridge, but I don't understand the 8-word
> limitation.
>
> Do you think the limitation is down to the chipset or the
> SDRAM, and that in a more recent PC I would not have such a
> limitation?
>
> > I wouldn't call 50 MByte/second reads into the PC's SDRAM slow! That's
> > about as fast as I'd expect (based on experience) in a 32/33 machine. What
> > kind of speeds were you expecting and, more importantly, what do you need?
>
> The thing is that to be efficient, I have to burst out from SDRAM
> 1024 32-bit words, and burst into SDRAM 768 words, and this results
> in a bus monopolization of around 121us for reads, and less than 50us
> for writes, which is more acceptable. Moreover, writes are divided into
> 'small' 48-word bursts, giving a monopolization of just 3us per burst.
>
> The total PCI bus load in my application is:
> Acquisition card -> SDRAM : input bitrate = 100Mbit
> SDRAM -> FPGA card : transfer bitrate = 125Mbit
> FPGA card -> SDRAM : transfer bitrate = 100Mbit
> SDRAM -> Ethernet card : output bitrate = 100Mbit
>
> As you can see, if my 125Mbit (SDRAM->FPGA) transfer actually takes
> the equivalent of a 250Mbit transfer because of poor
> latency and short bursts for read accesses, the arbiter
> will have a hard job!
>
> > Well, since this is the PC's sdram, they may want to limit the bandwidth you
> > get into it.
>
> You are probably right, but there might be a means to
> disable this bandwidth limitation, don't you think?
> Why can I have 1Gbit/s in one direction, and only 400Mbit/s
> in the other?
>
> Regards,
> Uxello
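As an aside on Avrum's point about disconnect timing: the difference between "disconnect with data" and "disconnect without data" is only when the target asserts STOP#. A minimal Verilog sketch of a target-side burst limiter that ends the burst on the 8th word, with hypothetical signal names and wait states ignored, might look like this:

    // Hypothetical sketch, not from the thread: a target-side burst limiter
    // that ends a read burst with "disconnect with data" on the 8th word.
    // Signal names are illustrative; wait states and turnaround are ignored.
    module burst_limit (
        input  wire clk,           // PCI clock
        input  wire burst_active,  // target selected, burst in progress
        input  wire data_phase,    // IRDY# and TRDY# both asserted this cycle
        output wire stop_n         // PCI STOP#, active low
    );
        reg [2:0] count;           // counts completed data phases, 0..7

        always @(posedge clk)
            if (!burst_active)
                count <= 3'd0;
            else if (data_phase)
                count <= count + 3'd1;

        // Assert STOP# together with the 8th data phase (count == 7), so the
        // final word transfers in the same cycle the disconnect is signalled.
        assign stop_n = !(burst_active && count == 3'd7);
    endmodule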
Article: 86402
Joseph H Allen wrote:
> (if only you could use the blocking assign output as an input to another
> clocked always block... but you can't).

True. A blocking assignment is like a local variable. However, you *can* make the always block as big as you like, with as many registers as you like.

-- Mike Treseler
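A minimal sketch of what Mike describes (names are illustrative): a blocking assignment acting as a scratch variable inside one clocked always block, feeding two registers:

    // Illustrative sketch: `sum` is written before it is read on every pass,
    // so it synthesizes as wiring, not a register; q1 and q2 become flip-flops.
    module one_big_block (
        input  wire       clk,
        input  wire [7:0] a, b,
        output reg  [7:0] q1, q2
    );
        reg [8:0] sum;
        always @(posedge clk) begin
            sum = a + b;                       // blocking: a local intermediate
            q1 <= sum[7:0];                    // registered output
            q2 <= sum[8] ? 8'hff : sum[7:0];   // registered, saturated output
        end
    endmodule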
Article: 86403
The PCI bus Memory Read Multiple command can give one cache-line transfer plus a prefetch of the next, or can provide two cache lines of data, depending on the bridge (I've seen both). I scanned the Intel 82443 data sheet and didn't see any elaboration on Memory Read Multiple or on cache-line size.

It may be that the 82443 can handle cache-line sizes other than 8 DWORDs. A PCI peripheral configured by the system can have the cache-line size defined by an 8-bit field in the PCI configuration space. The Xilinx PCI core doesn't automatically monitor this register, but it's easy to eavesdrop on the configuration cycle. The bridge might handle larger cache lines.

Check deeper into the Intel bridge documentation to find out whether you can define a cache-line size other than 8 DWORDs. It may be information buried in an app note or a users' guide that isn't communicated quickly in the data sheet.

"uxello" <uxello@free.fr> wrote in message news:42c00678$0$31776$636a15ce@news.free.fr...
> Hi all!
>
> Thank you for reading this post.
> I'm experiencing some problems getting good data transfer
> performance using a PCI core in an FPGA directly linked
> to a PCI connector in a PC (PCI 32-bit/33MHz).
>
> The PCI core and the FPGA do not seem to be the cause of
> the problem.
>
> The FPGA is most of the time acting on the bus as master,
> accessing the system SDRAM directly.
>
> Write accesses to SDRAM are very fast, since I can burst
> as many words as I want (in my case, 48 words), resulting
> in a 130MB/s bandwidth.
>
> However, read-access bursts are limited by the target
> (SDRAM controller or just the PCI arbiter, I don't know)
> to eight-word transfers, resulting in a very poor 50MB/s
> bandwidth. The target always asserts the STOP# pin after
> the 8th word transfer, resulting in a "disconnect without
> data transfer".
>
> All my memory accesses (reads and writes) are linearly
> addressed.
>
> Does anyone have an idea of how I can set up my system so
> I can get 64-word bursts for read accesses instead of
> 8-word bursts?
>
> My system is an Intel Pentium III 600MHz running under Linux.
>
> Best Regards,
> Uxello
>
> lspci gives me this:
> 00:00.0 Host bridge: Intel Corp. 440BX/ZX - 82443BX/ZX Host bridge (rev 03)
> 00:01.0 PCI bridge: Intel Corp. 440BX/ZX - 82443BX/ZX AGP bridge (rev 03)
> 00:07.0 ISA bridge: Intel Corp. 82371AB PIIX4 ISA (rev 02)
> 00:07.1 IDE interface: Intel Corp. 82371AB PIIX4 IDE (rev 01)
> 00:07.2 USB Controller: Intel Corp. 82371AB PIIX4 USB (rev 01)
> 00:07.3 Bridge: Intel Corp. 82371AB PIIX4 ACPI (rev 02)
> 00:11.0 VGA compatible controller: Silicon Integrated Systems [SiS]
> 86C326 (rev 0b)
> 00:14.0 Network and computing encryption device: Xilinx, Inc.: Unknown
> device cafe (rev 01)
>
> I set up the FPGA config regs as follows (lspci -vv):
> 00:14.0 Network and computing encryption device: Xilinx, Inc.: Unknown
> device cafe (rev 01)
> Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
> Stepping- SERR- FastB2B-
> Status: Cap- 66Mhz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort-
> <MAbort- >SERR- <PERR-
> Latency: 128 (32000ns min)
> Interrupt: pin A routed to IRQ 11
> Region 0: Memory at e0070000 (32-bit, non-prefetchable) [size=256]
> Region 1: Memory at e0071000 (32-bit, non-prefetchable) [size=256]
> Region 2: Memory at df800000 (32-bit, non-prefetchable) [size=8M]
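Eavesdropping on the configuration cycle, as John suggests above, amounts to latching byte 0 of config DWORD 3 (offset 0x0C, the Cache Line Size register) when the host writes it. A sketch with made-up sideband signal names (the real Xilinx core interface differs):

    // Hypothetical sketch: snoop the Cache Line Size byte as it is configured.
    // cfg_wr, cfg_addr, cfg_be_n and cfg_data are assumed sideband signals.
    module cls_snoop (
        input  wire        clk,
        input  wire        rst,
        input  wire        cfg_wr,     // config write hitting this device
        input  wire [5:0]  cfg_addr,   // DWORD address within config space
        input  wire [3:0]  cfg_be_n,   // byte enables, active low
        input  wire [31:0] cfg_data,
        output reg  [7:0]  cache_line_size
    );
        always @(posedge clk)
            if (rst)
                cache_line_size <= 8'd0;
            else if (cfg_wr && cfg_addr == 6'd3 && !cfg_be_n[0])
                cache_line_size <= cfg_data[7:0];   // DWORD 3 = offset 0x0C
    endmodule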
Article: 86404
Thanks, Ray.

Is it hard to design a down-converter and an up-converter inside an FPGA? I do not have experience designing converters. Will the converters take a lot of FPGA resources?

Johnson

"Ray Andraka" <ray@andraka.com> wrote in message news:M0lue.17936$FP2.9356@lakeread03...
> Johnson Liuis wrote:
>
>> Does anybody have filter design experience with FPGA? I would like to know
>> a general picture with recent FPGA technologies like XtremeDSP and others.
>> I am also curious about the limitations of FPGA design on filter design,
>> like the maximum center frequency and bandwidth of the filters that can be
>> implemented with an FPGA. Could anybody let me know if I am able to simulate
>> a SAW (surface acoustic wave) filter with 185MHz center frequency,
>> 4MHz double-side bandwidth, and max. 20dB insertion loss inside an FPGA?
>>
>> Any information will be highly appreciated. Thanks in advance.
>>
>> Johnson
>
> The FPGA doesn't limit your filter design other than max sample rate (and
> even then there are work-arounds). The DSP48 slices in Xilinx Virtex4 can
> do 500 MS/sec filters provided the data and coefficients are 18 bits or
> less and you choose a device with enough DSP48 slices to fit one slice per
> filter tap. Remember, an FPGA is simply a medium in which you realize a
> digital logic circuit.
>
> As to your particular example, you've got a rather narrow bandpass filter.
> Filtering such a narrow band relative to your sample rate is going to
> require a high-order filter (assuming a FIR filter, in order to get the
> linear phase characteristic of the SAW) if you insist on doing the
> filtering at the input sample rate. It is far more efficient to
> downconvert the signal to complex baseband, filter it with a 4 MHz low-pass
> filter, and then upconvert it back up to your 185 MHz center frequency.
> This way, the filter has a much wider passband relative to the sample
> rate, and is therefore much simpler to realize, both in terms of number
> of taps (coefficients) and in data rate. The hardest part of this design
> would be digitizing the data and getting it into the FPGA, and that is
> quite doable.
>
> --
> --Ray Andraka, P.E.
> President, the Andraka Consulting Group, Inc.
> 401/884-7930 Fax 401/884-7950
> email ray@andraka.com http://www.andraka.com
> "They that give up essential liberty to obtain a little temporary safety
> deserve neither liberty nor safety."
> -Benjamin Franklin, 1759
Article: 86405
Benjamin Menküc wrote:
> Hi,
>
> I have built a DVI to LVDS converter. However the picture still flickers
> and is off by 1 pixel. I think I am having trouble with the vsync. Is
> the vsync on the TMDS line the same as on the analog port? Does anybody
> know if the vsync just does one pulse per frame, I think so but I am not
> sure...

VSync is one pulse per field, which is to say one pulse per frame in a non-interlaced image.

> In my EDID data I find this:
> V Active Lines 1024 <- this is okay

Should be obvious; you'll display 1024 active lines.

> V Blank 42 <- what does this mean?

V Blank means "how many lines are part of the vertical blanking time."

> V Sync Offset 1 <- offset to what?

This means "the line, relative to the start of the vertical blanking time, on which we assert the vertical sync pulse." This is always confusing, depending on whether you consider the first line to be line one or line zero, but I suspect this means that your vertical sync pulse begins at the start of the vertical blanking time.

> V Sync Width 3 <- I guess it means that my pulse is 3 lines?

This tells you the width of the vertical sync pulse, given in units of lines. So, yes, this means that your VSYNC will be active for three line times.

> V Image Size

This should be the total image size, which is the sum of the number of active lines and the number of blanked lines.

-a
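To make the arithmetic concrete: 1024 active lines plus 42 blanking lines gives a 1066-line frame; taking offset 1 and width 3 literally, VSYNC covers three lines early in the blanking interval. A sketch of generating VSYNC from a line counter, with illustrative names:

    // Illustrative sketch using the EDID numbers above: 1024 active + 42
    // blanking = 1066 total lines. Whether "offset 1" means the first or the
    // second blanking line is exactly the one-vs-zero ambiguity mentioned
    // above; adjust V_OFS to taste.
    module vsync_gen (
        input  wire clk,        // pixel clock
        input  wire line_tick,  // one-cycle pulse at the start of each line
        input  wire rst,
        output wire vsync
    );
        localparam V_ACTIVE = 1024, V_BLANK = 42;
        localparam V_TOTAL  = V_ACTIVE + V_BLANK;  // 1066
        localparam V_OFS    = 1, V_WIDTH = 3;

        reg [10:0] line;
        always @(posedge clk)
            if (rst)
                line <= 11'd0;
            else if (line_tick)
                line <= (line == V_TOTAL - 1) ? 11'd0 : line + 11'd1;

        assign vsync = (line >= V_ACTIVE + V_OFS) &&
                       (line <  V_ACTIVE + V_OFS + V_WIDTH);
    endmodule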
Article: 86406
Some variants of the 440xx chipset needed a bit set in the configuration space of the host bridge to enable PCI read streaming. From what I have read, it doesn't work on the 440BX, though. You could try it; I believe it was bit 1 in config reg 0x50. Search around in Google to be sure. I have never tried it myself, and it is not clear to me whether this is 100% safe or whether it may not work reliably all the time. I think this is an undocumented bit.

You also want to make sure you are issuing READM (or maybe READL) commands to signal to the host bridge that you want a lot of data. Again, I don't know if this will help with the 440BX. A lot of chipsets have relatively poor performance for PCI burst reads.

-Ewan
Article: 86407
Has anyone used Summit's Visual Elite tool to do schematic/HDL based design? How does it compare with other tools? I believe this is an expensive tool. Is it worth the price?

--
Geoffrey Wall
Masters Student in Electrical/Computer Engineering
Florida State University, FAMU/FSU College of Engineering
wallge@eng.fsu.edu
Cell Phone: 850.339.4157
ECE Machine Intelligence Lab
http://www.eng.fsu.edu/mil
MIL Office Phone: 850.410.6145
Center for Applied Vision and Imaging Science
http://cavis.fsu.edu/
CAVIS Office Phone: 850.645.2257
Article: 86408
In article <3iaqseFkg3klU1@individual.net>, Mike Treseler <mike_treseler@comcast.net> wrote:
>Joseph H Allen wrote:
>> (if only you could use the blocking assign output as an input to another
>> clocked always block... but you can't).

>True. A blocking assignment is like a local variable.
>However, you *can* make the always block as big
>as you like with as many registers as you like.

So... you should make the entire design in just one always block? :-)

Verilog is a crazy language. I write code in order, but you really don't have to:

    // Move ~c into a
    always @(b or c) begin
        a = b;
        b = c;
        b = ~c;
    end

A change in c triggers the always block: ~c is moved into b. The change in b retriggers the always block: b is moved into a. I guess synthesis tools have to make a dependency graph to figure out the dataflow.

--
/* jhallen@world.std.com (192.74.137.5) */ /* Joseph H. Allen */
int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
+r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2
]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}
Article: 86409
Hurrah!! Thank you John for your advice!

I tested doing a 'Memory Read Multiple' instead of a 'Memory Read', and now my master reads SDRAM using very long bursts (1024 words)!! The PCI monopolization for reads is now 31us instead of 120us.

I also thank all the other people who replied to my post for their expertise!

Best regards,
Uxello

John_H wrote:
> The PCI bus Memory Read Multiple command can give one cache-line transfer
> plus a prefetch of the next, or can provide two cache lines of data,
> depending on the bridge (I've seen both). I scanned the Intel 82443 data
> sheet and didn't see any elaboration on Memory Read Multiple or on
> cache-line size.
>
> It may be that the 82443 can handle cache-line sizes other than 8 DWORDs.
> A PCI peripheral configured by the system can have the cache-line size
> defined by an 8-bit field in the PCI configuration space. The Xilinx PCI
> core doesn't automatically monitor this register, but it's easy to eavesdrop
> on the configuration cycle. The bridge might handle larger cache lines.
>
> Check deeper into the Intel bridge documentation to find out whether you
> can define a cache-line size other than 8 DWORDs. It may be information
> buried in an app note or a users' guide that isn't communicated quickly in
> the data sheet.
Article: 86410
Hello,

I'm trying to implement a USB 1.1 device on an FPGA (Virtex2). I first used the USB 1.1 core from Rudolf Usselmann on opencores.org. I got it to enumerate correctly on my Win2K host and I could transmit data through two bulk pipes. But I couldn't solve problems with several lost bytes in the pipes. Also, I can't imagine how this core can implement CRC error correction, because to do so it would have to delete some data from the external FIFOs it is connected to.

Because of this and other reasons (like better documentation and concept, and an easy upgrade to USB 2.0), I'm now trying to use the USB 2.0 core with my USB 1.1 transceiver. I connected the UTMI lines from the "USB 1.1 PHY" project on opencores to the 2.0 core. Because that didn't work (I think because of the speed negotiation), I replaced the "usbf_utmi_if" module in the 2.0 core with the one from the originally used 1.1 core and added the additional signals. The core and the physical layer are running at 48 MHz, and I've adapted the settings in defines.v at the two relevant positions to 48 MHz.

Now I can read frame numbers in the frame register and find a valid setup packet (Get Device Descriptor) in the first 8 bytes of the endpoint 0 OUT buffer. But the buffer-size value doesn't change after the reception of the setup packet.

My question is: has anyone used this core who can tell me how to detect setup packets? Will it decrease the buffer-size value by 8, or are these packets indicated in another way?

I have initialized the core with these values:
@0x40 <- 0x00010040
@0x48 <- 0x00800000
@0x4c <- 0x00800040

Could it be a problem with wrong timeout values? I read somewhere that the original values are a little too small. Or could it be a problem with the UTMI interface, even though I can read a valid setup packet?

Thanks,
Michael
Article: 86411
Hello,

I would like to build a 1Gbit/s data encryptor/decryptor using an FPGA chip, but I have a big problem finding an appropriate chip. It should contain about 3000 LEs, 70 I/O pins and at least 12 dual-port RAM blocks (I need two read ports per block) configurable as 512x8 banks. Additionally, it should be flash-based, or SRAM-based with an encrypted bitstream. And it must be cheap. Here are the options I know of:

1. Altera Cyclone 1C3-8. It is perfectly suited to my needs, and is very cheap. There is extremely good design software available. But it can't be used: it's totally unsecure.

2. Actel ProASIC+: flash-based and cheap, but their memory blocks are not big enough and have only one read port. Terribly bad software quality.

3. Actel ProASIC3: good, moderately expensive, but not yet available.

4. Lattice XP: unknown price, unknown availability, technically suitable. I've heard many bad opinions about Lattice, but personally I have no experience with their chips.

5. Xilinx Virtex4: too powerful and thus probably much too expensive, good availability, very good support [:-)], software quality unknown, but probably comparable to Quartus.

Could you please write something about the remaining options?

Best regards
Piotr Wyderski
"Michael Dreschmann" <michaeldre@gmx.de> schrieb im Newsbeitrag news:42c045ca.11287630@news.rhein-zeitung.de... > Hello, > > I'm trying to imlpement an USB 1.1 Device on FPGA (Virtex2). Therefore > I first used the USB 1.1 Core from Rudolf Usselmann on opencores.org. > I got it to enumerate correctly on my Win2K host and I could transmit > data through two bulk pipes. But I couldn't solve problems with > several lost bytes in the pipes. Also I can't imagine how this core > can implement CRC error correction because therefore it would have to > delete some data from the external FIFOs it is connected to. > > Because of this and other reasons (like better documentation and > concept and an easy upgrade to USB 2.0) I'm now trying to use the USB > 2.0 core with my USB 1.1 tranceiver. I connected the UTMI lines from > the "USB 1.1 PHY" project on opencores to the 2.0 core. Because that > didn't work (I think because of the speed negotiation) I replaced the > "usbf_utmi_if" modul in the 2.0 core with that from the originally > used 1.1 core and added the additional signals. The core and the > physical layer are running at 48 MHz and I've adapted the settings in > the defines.v at the two relevant positions to 48MHz. > Now I can read framenumbers in the Frameregister and find a valid > setup packed (GetDevice Descriptor) in the first 8 byte of endpoint 0 > OUT buffer. But the buffersize value doesn't change after the > reception of the setup packed. > My question is has anyone used that core and can tell me, how to > detect setup packeds? Will it decrease the buffersize value by 8 or > are these packeds indicated in an other way? > > I have initialized the core with this values: > @0x40 <- 0x00010040 > @0x48 <- 0x00800000 > @0x4c <- 0x00800040 > > Could it be a problem with wrong timeout values? I read something, > that the original values are a litte to small. Or could that be a > problem with the UTMI interface although I can read a valid setup > packed? > > Thanks, > Michael this core has been used. but those who have have not committed the changes back to the openocores repository I guess. to my knowledge all of the USB cores at opencores have some problems. so be prepared to long troubleshooting and bug/problem fixing. anttiArticle: 86413
"Piotr Wyderski" <wyderskiREMOVE@ii.uni.wroc.pl> schrieb im Newsbeitrag news:d9pkb0$oas$1@panorama.wcss.wroc.pl... > Hello, > > I would like to build a 1GBit/s data encryptor/decryptor using > an FPGA chip, but I have a big problem with an appropriate > chip. It should contain about 3000LE, 70 IO pins and at least > 12 dual-port RAM blocks (I need two read ports per block) configurable > as 512x8 banks. Additionally, it should be Flash-based or SRAM > -based with encrypted bitstream. And must be cheap. Here are the > options I know of: > > 1. Altera Cyclone 1C3-8. It is perfectly suited for my needs, > and is very cheap. There is extremely good design software > available. But it can't be used: it's totally unsecure. with some trick a RAM based FPGA can be made secure as well but generically its nogo for security related stuff > 2. Actel ProASIC+: flash-based and cheap, but their > memory blocks are not big enough and have only > one read port. Terribly bad software quality. CORRECT, the software is a mist > 3. Actel ProASIC3: good, moderately expensive but not yet available. CORRECT, ProAsic3 600 chips was there (i did see it!) November 2004, but in generic no PA3 devices are available. And the software is the same. > 4. Lattice XP: unknown price, unknown availability, technically suitable, > I've heard many bad opinions about Lattice, but personally I have no > experience with their chips. XP10 is available. When I asked a disti (WBC) where I could get XP, the answer was "from me", ie the disties are able to ship immediatly. > 5. Xilinx Virtex4: too powerful and thus probably much too expensive, > good availability, very good support [:-)], software quality unknown, > but probably comparable to Quartus. software is no OK. I guess, any comment: "comparable to Quartus" is understood as VERY bad insulting comment in the Xilinx side of the world. But you are right, the software OK. expensive, yes expect any V4 to be >=100USD > Could you please write something about remaining options? there isnt much, quicklogic is OTP, secure and ok, but OTP Atmel AT94Sxx are also secure but software sucks bad times as well and is not free Antti > Best regards > Piotr Wyderski > > >Article: 86414
Article: 86414
Hello Stuart,

> Is it practical to put 2/4/8 processors in a single
> device and have them share a single DDR DRAM memory
> subsystem?
>
> -- Stuart

If you use a good DDR DRAM controller, i.e. not the standard one supplied by Xilinx, it gives reasonably good performance, especially if you can squeeze in a cache on the instruction and/or data ports. I've made such a controller, which can have up to four ports, running the DDR memory at 133 MHz at 16-bit width (peak bandwidth is thus 2*133*2 = 532 Mbytes/s). The memory sits on a module with a 1000 kgate Spartan-3, together with 8 MB of flash for FPGA and other data. It can run Linux. Write me personally if you'd like more info.

What's your application anyway? You might be better off implementing any heavy-duty algorithms using the FPGA logic itself, if you know what I mean.

Regards,
Finn Nielsen
remove.prefix.finnstadel@tiscali.dk
Article: 86415
Thanks Peter. How can I run separate standalone programs on both PPCs simultaneously, and especially, how can I download them onto the target? I couldn't do "update bitstream", as it says the ELF file associated with PPC_1 should be downloaded through JTAG via GDB. I am only able to update the bitstream with the one program associated with PPC_0.

Thank you for your time and consideration,
regards
Jaggu
Article: 86416
Hi Andy,

My vsync period would be 1066 lines then. So the vsync has no front porch but a lot of back porch, okay. Maybe I will try to verify it with an oscilloscope some time.

Thanks for your answer.

regards,
Benjamin
Article: 86417
Piotr,

To complement Antti's comments: I have been working with Xilinx for about 7 years, have used it everywhere, and despite some minor software issues (they have always been resolved, and who said there is any complex design with no bugs???) I have always been happy.

I have used Altera Cyclone a few times, and even now we keep some diversity by using C2 for some minor project. I have also tried evaluating Lattice and Actel. In the past, we had to choose FPGAs for ASIC prototyping and I was greatly disappointed by Altera tools. Yes, Altera tools, but not FPGAs! FPGAs are always extraordinary stuff, regardless of their vendor! And this is the main reason we tried using Xilinx, and occasionally Altera Cyclone for small things. The latest Quartus 4.2 / 5.x is definitely better than before, yet it still has some completely useless warnings.

If you are not planning massive production and you need only a few units, think no further. Go with Xilinx, forget about the rest. Software quality is good, large support community, prices "above the average"...

Vladislav

"Piotr Wyderski" <wyderskiREMOVE@ii.uni.wroc.pl> wrote in message news:d9pkb0$oas$1@panorama.wcss.wroc.pl...
> Hello,
>
> I would like to build a 1Gbit/s data encryptor/decryptor using
> an FPGA chip, but I have a big problem finding an appropriate
> chip. It should contain about 3000 LEs, 70 I/O pins and at least
> 12 dual-port RAM blocks (I need two read ports per block) configurable
> as 512x8 banks. Additionally, it should be flash-based, or SRAM-based
> with an encrypted bitstream. And it must be cheap. Here are the
> options I know of:
>
> 1. Altera Cyclone 1C3-8. It is perfectly suited to my needs,
> and is very cheap. There is extremely good design software
> available. But it can't be used: it's totally unsecure.
>
> 2. Actel ProASIC+: flash-based and cheap, but their
> memory blocks are not big enough and have only
> one read port. Terribly bad software quality.
>
> 3. Actel ProASIC3: good, moderately expensive, but not yet available.
>
> 4. Lattice XP: unknown price, unknown availability, technically suitable.
> I've heard many bad opinions about Lattice, but personally I have no
> experience with their chips.
>
> 5. Xilinx Virtex4: too powerful and thus probably much too expensive,
> good availability, very good support [:-)], software quality unknown,
> but probably comparable to Quartus.
>
> Could you please write something about the remaining options?
>
> Best regards
> Piotr Wyderski
Article: 86418
In article <42c02bf8$0$167$bb624dac@diablo.uninet.ee>, valentin tihomirov <spam@abelectron.com> wrote:
>
> I see two issues here, looking for the classical design practices.
>
> Consider a solution-space scanner feeding an N-stage pipeline calculating the
> quality of the solution. As soon as the quality appears satisfactory, we
> stop the pipeline and show the solution to the user. How do you determine
> which data was entering the pipeline just N steps earlier?

Does it make sense to stick a circular buffer, sized a bit bigger than N, at the top of the pipeline, and tag each entry with a short quantity indicating where in the buffer its input is stored? This reduces the amount you have to carry down the pipeline, though the circular buffer is a small, wide memory, which might be inefficient in the FPGA world of large, narrow memories.

Tom
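A sketch of Tom's suggestion, with illustrative widths: the buffer depth just has to exceed the pipeline latency N, and the tag carried down the pipeline is only log2(depth) bits wide:

    // Hypothetical sketch: store each pipeline input in a circular buffer and
    // carry only the buffer index (tag) down the pipeline.
    module input_recall #(
        parameter WIDTH = 64,
        parameter AW    = 5            // depth 32 > pipeline latency N
    ) (
        input  wire             clk,
        input  wire             rst,
        input  wire             in_valid,
        input  wire [WIDTH-1:0] in_data,
        output reg  [AW-1:0]    in_tag,      // travels with the pipeline entry
        input  wire [AW-1:0]    recall_tag,  // tag of the winning result
        output reg  [WIDTH-1:0] recall_data
    );
        reg [WIDTH-1:0] buffer [0:(1<<AW)-1];

        always @(posedge clk) begin
            if (rst)
                in_tag <= {AW{1'b0}};
            else if (in_valid) begin
                buffer[in_tag] <= in_data;  // entries older than N get reused
                in_tag <= in_tag + 1'b1;    // wraps naturally, power-of-2 depth
            end
            recall_data <= buffer[recall_tag];
        end
    endmodule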
Article: 86419
Michael,

> I can't imagine how this core
> can implement CRC error correction

I doubt that it does, or even tries to. USB requires that devices do CRC generation and checking. Devices do no correcting; they just flag errors if they occur.

Marco
________________________
Marc Reinig
UCO/Lick Observatory
Laboratory for Adaptive Optics
Article: 86420
Hi Dave,

> Hi All,
> I'm after a piece of FPGA or DSP-based hardware for the purposes of
> performing real-time video processing on digital high-bandwidth video,
> typically DVI 1280x1024 @ 60Hz.
> Can anybody recommend a board (either PCI/PC or VME based) that would
> have the appropriate DVI-like inputs and outputs and which would be likely
> to have sufficient bandwidth to handle this resolution?
>
> Any comments or suggestions are gratefully received.

I haven't got a clue how much it will cost you, but how about http://www.mangodsp.com/bluejay-cpci.asp ? Sounds powerful enough...

Best regards,
Ben
Article: 86421
Johnson Liuis wrote:
> Thanks, Ray.
>
> Is it hard to design a down-converter and an up-converter inside an FPGA? I
> do not have experience designing converters. Will the converters take a lot
> of FPGA resources?
>
> Johnson

Depends on your experience, I suppose. I don't think it is hard, but others may find that it is. The resource usage depends on several factors: What is the data rate? What is the required SNR? What is the bandwidth relative to the sample rate? How much out-of-band filtering is required? What is the ratio of the input and output sample rates? Perhaps the largest determinant is the filter architecture. Using a multi-rate architecture results in substantial area savings.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com http://www.andraka.com
"They that give up essential liberty to obtain a little temporary safety
deserve neither liberty nor safety."
-Benjamin Franklin, 1759
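The first step of the downconversion Ray described earlier in the thread is just a multiply by an NCO; a minimal sketch with illustrative widths and names (the NCO lookup, rounding and the low-pass/decimation stages are omitted):

    // Illustrative sketch only: mix a real input down to complex baseband.
    // nco_cos/nco_sin would come from a phase-accumulator-driven lookup table.
    module ddc_mixer (
        input  wire               clk,
        input  wire signed [11:0] adc_in,
        input  wire signed [15:0] nco_cos,
        input  wire signed [15:0] nco_sin,
        output reg  signed [27:0] bb_i,   // in-phase baseband
        output reg  signed [27:0] bb_q    // quadrature baseband
    );
        always @(posedge clk) begin
            bb_i <= adc_in * nco_cos;
            bb_q <= -(adc_in * nco_sin);
        end
    endmodule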
Article: 86422
Jedi,

As I understand it, the expiration of the Nios kit license does not prevent you from using Nios. You will not receive any additional subscription shipments, but you perpetually have a license to compile/ship a product with Nios using the last version that you received in your subscription.

For Quartus the story is different; the dev kit edition of Quartus does expire. You might check Quartus Web Edition for continued use if you don't wish to renew your subscription.

Jesse Kempa
Altera
jkempa -at- altera -dot- com
Article: 86423
> This core has been used, but those who have used it have not committed
> their changes back to the opencores repository, I guess.

Perhaps then we should start a list of possible problems. From my side, related to the USB 2.0 core:

- The Wishbone reset input is active low.
- usb_vbus is inverted: 0 = device is attached to the bus, 1 = detached.
- Here you can find: "The problem has to do with timeout values, for example in waiting for the data0 packet after a setup transaction. The core values are 622ns timeout in full speed and 400ns in high speed. These values seem to time out too early; if I make them much bigger it does receive the data0 packet." "The timeouts are more like 1.3uS and 1.6uS. See sections 7.1.18 and 7.1.19 of the spec for specific bit time values." http://www.usb.org/phpbb/viewtopic.php?t=5550&highlight=opencores+++timeout (I haven't tried it yet, will do it tomorrow.)

Michael
Article: 86424
> I doubt that it does or even tries to do CRC error correction. USB
> requires devices do CRC generation and checking. Devices do no correcting;
> they just flag errors if they occur.

With "CRC error correction" I mean the whole thing: retransmit, or discard received data with a bad CRC. The 1.1 core uses external FIFOs for every endpoint. For an OUT endpoint you have the (output) signals data_out and write_strobe, and the (input) signal buffer_full in the entity of the core. Because I can't find any buffer memory inside the core that could hold 64 bytes (the max packet size I used), I assume that it sends every byte received from the USB directly to the external endpoint FIFO.

But what if a CRC error is detected at the end? There is no way the core can delete the last packet it just sent to the FIFO. So, if it doesn't send a handshake in reply to this bad packet, it will get it a second time and will probably write it to the FIFO again. If it sends an ACK, then you still have a bad packet in your FIFO. The core has a CRC error output signal (for all endpoints together), but I don't see how I can use it to avoid such problems...

Michael
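One common fix for the problem Michael describes (not something the core itself provides): give the endpoint FIFO a committed write pointer and a speculative one, write packet bytes speculatively, then commit on good CRC or rewind on error. A sketch with illustrative names:

    // Hypothetical sketch: a byte FIFO whose write side can be rewound.
    // Read side shown only as a pointer; only data below wp_commit is valid.
    module commit_fifo #(
        parameter AW = 7              // 128 bytes, at least two max packets
    ) (
        input  wire       clk,
        input  wire       rst,
        input  wire       wr,         // byte strobe from the USB engine
        input  wire [7:0] wdata,
        input  wire       pkt_good,   // packet ended, CRC ok: commit it
        input  wire       pkt_bad,    // packet ended, CRC error: discard it
        output wire       full
    );
        reg [7:0]  mem [0:(1<<AW)-1];
        reg [AW:0] wp_spec, wp_commit, rp;  // extra bit for full/empty

        always @(posedge clk)
            if (rst) begin
                wp_spec <= 0; wp_commit <= 0; rp <= 0;
            end else if (pkt_bad)
                wp_spec <= wp_commit;        // rewind: bad packet vanishes
            else begin
                if (wr && !full) begin
                    mem[wp_spec[AW-1:0]] <= wdata;
                    wp_spec <= wp_spec + 1'b1;
                end
                if (pkt_good)
                    wp_commit <= wp_spec;    // packet becomes visible to reader
            end

        assign full = (wp_spec - rp) == (1 << AW);
    endmodule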