On Tue, 17 Feb 2004 11:26:05 +0200, "valentin tihomirov" <valentin_NOSPAM_NOWORMS@abelectron.com> wrote:

>FPGAs require an EEPROM; maybe CPLD + dual-port RAM buffer is better for
>the 100MHz interface than one FPGA?

You have 2Mbyte of Flash on the AXIS controller (if it's the one I think it is), and that can be used to hold a compressed FPGA image along with your software. I expect the solution involving the FPGA will be cheaper than the one involving the CPLD.

Regards,
Allan.

Article: 66351
lenz19@gmx.de (lenz) wrote in message news:<7f673150.0402171059.2cf41c85@posting.google.com>...
> i tried to find some references for lossless compression
> algorithms implemented in fpga. I found a lot about
> bitstream compression but (nearly) nothing about fpga
> implementations of lossless compression algorithms.
>
> Which lossless compression algorithms are suited for fpga
> implementation.

Since you mentioned GZIP, I will start with Lempel-Ziv (LZ) based compression schemes:

Huang, Saxena and McCluskey, "A Reliable Compressor on Reconfigurable Coprocessors"
http://csdl.computer.org/comp/proceedings/fccm/2000/0871/00/08710249abs.htm

Jung and Burleson, "Efficient VLSI for Lempel-Ziv Compression in Wireless Data Communication Networks"
http://citeseer.nj.nec.com/359751.html

It should be noted that the systolic array implementation described in the papers would violate the Stac patents. Basically, Stac has patented the notion of using a shift register to store the history buffer. Though I personally believe the concept of storing the dictionary in a shift register to be rather obvious... we have opted not to use a shift register implementation so as to not risk litigation. We are in the process of completing an LZ-based compression core that is very similar to LZS, but does not violate the patents. The performance in a Xilinx V2Pro is expected to be on the order of 100M bytes/second, with a 512 byte dictionary. We expect to release this core for Xilinx v2/v2pro/s3 and Altera Stratix by the end of March.

These papers should at least get you started. Large CAMs are not a particularly FPGA-friendly structure, but similar structures can be effectively implemented. For decompression, sequential implementations will be just as fast as parallel ones, as on decompression only one location from the dictionary needs to be read per output. For compression, a parallel implementation will allow for processing up to 100M tokens per second; a sequential approach will be much slower.

LZ compression is a very good approach for "generic" data, in areas such as storage or networking. Since you did not mention what sort of data you were going to compress, I will make brief mention of non-LZ approaches. IBM's ALDC is something worth studying to see an alternative approach to compression. As far as patents go... the adaptive nature of the algorithm is really quite novel, and dare I say mathematically quite cool.

If it is images, one is much better off going with an algorithm that is tailored to this sort of data-set, for example LOCO/JPEG-LS for continuous-tone images. We are wrapping up an implementation of this algorithm, and our first hardware implementation is running at 35M 12-bit pixels/sec using ~65% of an XC2V1000-4. The challenge with this algorithm is in the pipelining, as each pixel prediction, error calculation, and statistics update must be completed prior to processing the next pixel. We expect the final implementation to be quite a bit faster and slightly smaller.

One of the challenges faced implementing compression algorithms in hardware is that the algorithms are most often specified in the standards documents in C pseudocode, so implementation often requires translating a substantially serial implementation into a pipelineable design. The more significant challenge with implementing these algorithms in hardware is that each pixel/token is completely dependent upon the error and updated statistics from the previous pixel/token.
If you can identify what this minimal loop is, and put all other logic into pipelines before and after this loop, your performance will be limited solely by the minimal loop. Most of these algorithms look at minimizing the total compute required, rather than minimizing the complexity of this minimal loop. The former will dictate software performance, the latter hardware, in optimal implementations. As an example, JPEG-LS in near-lossless mode is only marginally slower in software, but about 50% as fast when implemented in hardware, each as compared to JPEG-LS in lossless mode.

I hope these comments at least offer a brief introduction. I also hope that my comments were not too commercial in nature, having mentioned two cores that we are getting ready to release.

Regards,
Erik Widding.

--- Birger Engineering, Inc. -------------------------------- 617.695.9233
100 Boylston St #1070; Boston, MA 02116 -------- http://www.birger.com

Article: 66352
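The pipelining point can be sketched in a few lines of Verilog. This is an invented toy example, not the JPEG-LS core described above: only the adaptive bias register forms a single-cycle recurrence, while the logic before and after it sits in ordinary pipeline stages that could be made as deep as timing requires.

// A toy example only, not the JPEG-LS core discussed above; names invented.
// The single-cycle recurrence is confined to the 'bias' register; everything
// before and after it is an ordinary pipeline stage.
module minimal_loop_sketch (
    input  wire       clk,
    input  wire       valid_in,
    input  wire [7:0] pixel_in,
    output reg        valid_out,
    output reg  [8:0] error_out     // prediction error, truncated for the sketch
);
    // Pipeline stage before the loop: register/format the input. More stages
    // could be inserted here without affecting the recurrence.
    reg       valid_d = 0;
    reg [7:0] pixel_d = 0;
    always @(posedge clk) begin
        valid_d <= valid_in;
        pixel_d <= pixel_in;
    end

    // The minimal loop: the adaptive bias must be updated before the next
    // pixel is processed, so the adder plus shift here set the clock rate.
    reg  signed [9:0] bias = 0;
    wire signed [9:0] err  = $signed({2'b00, pixel_d}) - bias;
    always @(posedge clk)
        if (valid_d)
            bias <= bias + (err >>> 3);    // simple first-order adaptation

    // Pipeline stage after the loop: pack the error; again freely pipelineable.
    always @(posedge clk) begin
        valid_out <= valid_d;
        error_out <= err[8:0];
    end
endmodule

Anything pulled out of that feedback path stops limiting the clock rate; only the logic left inside the loop does.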
On Tue, 17 Feb 2004 12:01:49 +0200, "valentin tihomirov" <valentin_NOSPAM_NOWORMS@abelectron.com> wrote:
>
>> There's a large market for phone line connection devices, so it's not
>> all that hard to find codecs designed for the task, e.g. AD73322L
>
>What is analog front-end? The datasheet gives idea, this is ADC/DAC device
>with capability of mixing input and output audio. Datasheet states it can be
>used in telephony applications indeed. Does it mean the chip can be used as
>hybrid function (DAA) directly connecting to telephone line Tip and Ring
>wires? Do we still need PLD to transfer data to a powerful system for storing
>or TCP/IP conversion? Thanks.

In another post, you said:
> Telephone line is that one where you connect your phone at home and office.

Does this mean that you are trying to mimic a telephone, or a telephone exchange? The requirements are very different. You mention DAA, which implies that you are trying to mimic a telephone, but I suspect that you really meant SLIC, which is the interface to a phone line inside a phone exchange (or PABX, etc.).

BTW, all of this is off-topic for comp.arch.fpga.

Regards,
Allan.

Article: 66353
Philip wrote:
>
>>OK, I give in. Is there some kind of ex-AMD
>>conspiracy going on here?
<snip>
>
> Yes, there is a conspiracy. Live with it.
>

For further evidence of this insidious conspiracy, consider this recent ebay listing:
http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=2595421628&category=50913

" PGA Tutorial and Design Tools!
" This is from 1988, during the time that AMD was in
" the PGA/LCA business along with Xilinx

Brian

Article: 66354
In comp.arch Martin Schoeberl <martin.schoeberl@chello.at> wrote:
> > Is there a community that is actively involved in discussing and/or
> > developing FPGA-based Forth chips, or more generally, stack
> > machines?
>
> The Java Virtual Machine is stack based. There are some projects to
> build a 'real' Java machine. You can find more information about a
> solution in an FPGA (with VHDL source) at: http://www.jopdesign.com/
>
> It is successfully implemented in Altera ACEX 1K50, Cyclone (EP1C6) and
> Xilinx Spartan2.

It would be interesting to see results for a version that cached the top of the stack and used a more realistic memory interface.

> Martin

--
Sander

+++ Out of cheese error +++

Article: 66355
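As a rough sketch of what such a top-of-stack cache might look like (module name, interface and sizes are all invented; this is not the JOP design): the top two stack entries live in registers, so a push or pop touches the on-chip memory only to spill or refill the third entry. No overflow or underflow checking is shown.

// A rough sketch, not the JOP design; interface, names and sizes invented.
// The two top stack entries live in registers; the rest of the stack sits
// in an on-chip memory array. No overflow/underflow checking is shown.
module tos_cache #(parameter WIDTH = 32, parameter DEPTH = 256) (
    input  wire             clk,
    input  wire             push,       // push din
    input  wire             pop,        // pop the top element
    input  wire [WIDTH-1:0] din,
    output wire [WIDTH-1:0] tos_out     // current top of stack
);
    reg [WIDTH-1:0] tos, nos;                    // top and next-on-stack
    reg [WIDTH-1:0] ram [0:DEPTH-1];             // spill area for deeper entries
    reg [$clog2(DEPTH)-1:0] sp = 0;              // number of spilled words

    assign tos_out = tos;

    always @(posedge clk) begin
        if (push && !pop) begin                  // push: spill NOS into the RAM
            ram[sp] <= nos;
            sp      <= sp + 1;
            nos     <= tos;
            tos     <= din;
        end else if (pop && !push) begin         // pop: refill NOS from the RAM
            tos <= nos;
            nos <= ram[sp - 1];
            sp  <= sp - 1;
        end else if (push && pop) begin          // replace the top, e.g. an ALU result
            tos <= din;
        end
    end
endmodule

Whether the memory array maps to block RAM or distributed RAM, and whether the refill path needs an extra bypass for read latency, depends on the device and the tools.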
hi all,

I have a counter running at 50 MHz. Now I have to sample that counter at 77 MHz.

My question is: can I sample the counter running at 50 MHz directly with the 77 MHz clock, or should I synchronize the 50 MHz counter to the 77 MHz clock domain and only then sample it?

What are the effects if I don't synchronize the 50 MHz counter and directly sample it with 77 MHz?

rgds,
prav

Article: 66356
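One common answer, sketched below with invented module and signal names: convert the counter to Gray code in the 50 MHz domain, double-register it in the 77 MHz domain, and convert back to binary. Because only one Gray bit changes per increment, a sample taken mid-transition is off by at most one count, whereas sampling the binary counter directly can return a value with some bits old and some new.

// A sketch only: Gray-coded transfer of a counter value between clock
// domains. Module and signal names are invented.
module cdc_counter_sample #(parameter W = 16) (
    input  wire         clk50,         // counter clock domain
    input  wire         clk77,         // sampling clock domain
    output reg  [W-1:0] count_in_77    // counter value as seen at 77 MHz
);
    // Free-running counter and a Gray-coded copy, both in the 50 MHz domain.
    // gray50 always corresponds to the current count50 value.
    reg [W-1:0] count50 = 0, gray50 = 0;
    always @(posedge clk50) begin
        count50 <= count50 + 1;
        gray50  <= (count50 + 1) ^ ((count50 + 1) >> 1);   // binary -> Gray
    end

    // Two-flop synchroniser in the 77 MHz domain. Only one Gray bit changes
    // per increment, so a sample settles to either the old or the new value,
    // never to something unrelated.
    reg [W-1:0] gray_meta = 0, gray_sync = 0;
    always @(posedge clk77) begin
        gray_meta <= gray50;
        gray_sync <= gray_meta;
    end

    // Gray -> binary conversion, then register the result.
    reg [W-1:0] bin;
    integer i;
    always @* begin
        bin[W-1] = gray_sync[W-1];
        for (i = W-2; i >= 0; i = i - 1)
            bin[i] = bin[i+1] ^ gray_sync[i];
    end
    always @(posedge clk77)
        count_in_77 <= bin;
endmodule

The sampled value arrives a few 77 MHz cycles late, which is usually acceptable for a status or timestamp counter.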
Hello,

My question: is it also possible to watch signals which are connected directly to IOBs in ChipScope? It seems to me that they are not available in the "ChipScope Analyser"; only internal signals are available (register inputs and outputs). Is that right, or is it a mistake of mine? So far I couldn't manage to display signals which come directly from an IO pad.

Tobias

Article: 66357
Marius Vollmer <mvo@zagadka.de> wrote in message news:<87ad3hp6w2.fsf@zagadka.ping.de>...
> Imagine you want to have an FPGA board that has a USB port and no
> other connection (i.e., no other way to upload a bitstream). Can that
> FPGA bootstrap itself over the USB port?
>
> There would be a 'boot' bitstream in some flash on the board and the
> FPGA would be configured initially with that bitstream. The function
> of that bitstream would be to make the FPGA listen on the USB port for
> another bitstream that is then used to configure the FPGA for its real
> function.
>
> Can this be done? Without external memory (other than the boot
> flash)?

Yes, but it is a little bit dangerous, as the FPGA would rewrite the primary config, and if the process is not successful the system will be totally dead.

Altera Cyclone: doable with no tricks.
Atmel FPSLIC: use the I2C port to reprogram the config memory; an appnote exists.
Xilinx: connect the config memory JTAG to the FPGA.

antti
www.openchip.org

Article: 66358
I don't know what a SLIC is, or the data protocols between PABXs. The requirement is the usual telephone line you have at home. DAAs are used to connect to the lines (they can be replaced with a transformer). The analogue telephone interface is really off-topic, but the high-speed capabilities of PLDs are not.

Article: 66359
I heard something about some major FPGA patents due to expire soon. I think these are owned by Xilinx and/or Altera. I would be interested if anybody can clarify what exactly these patents are, and if anyone has an opinion on whether their expiry could potentially enable market entry by new entrants to the FPGA arena, or new product families from the existing FPGA vendors.

Regards,
Paul.

Article: 66360
Hello,

lenz19@gmx.de (lenz) writes:
> Which lossless compression algorithms are suited for fpga
> implementation.

If you know in advance something about the type of data (e.g. English language text), then Huffman coding could be a quick and small solution. It could, for example, easily be done with a lookup table.

Florian
--
int m,u,e=0;float l,_,I;main(){for(;1840-e;putchar((++e>907&&942>e?61-m:u) ["\t#*fg-pa.vwCh`lwp-e+#h`lwP##mbjqloE"]^3))for(u=_=l=0;79-(m=e%80)&& I*l+_*_<6&&26-++u;_=2*l*_+e/80*.09-1,l=I)I=l*l-_*_-2+m/27.;}

Article: 66361
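A minimal sketch of the lookup-table idea (the code table, symbol widths and module name are invented for illustration): with a fixed code of at most four bits, the next four bits of the stream index a small table that returns the symbol and the number of bits consumed. A barrel shifter in front of it would advance the stream by 'length' bits per cycle.

// A sketch of the lookup-table idea; the code table below is invented
// (A=0, B=10, C=110, D=111) and any real table would be generated from
// the source statistics.
module huff_lut_decode (
    input  wire [3:0] bits_in,   // next 4 bits of the stream, first bit in bits_in[3]
    output reg  [1:0] symbol,    // decoded symbol, 4-symbol alphabet
    output reg  [2:0] length     // number of bits actually consumed
);
    always @* begin
        casez (bits_in)
            4'b0???: begin symbol = 2'd0; length = 3'd1; end   // 'A' = 0
            4'b10??: begin symbol = 2'd1; length = 3'd2; end   // 'B' = 10
            4'b110?: begin symbol = 2'd2; length = 3'd3; end   // 'C' = 110
            default: begin symbol = 2'd3; length = 3'd3; end   // 'D' = 111
        endcase
    end
endmodule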
Hello,

I have to design a universal PCI card according to PCI spec 2.2, 33 MHz and 32 bits. However, for several reasons I have to use an FPGA that is not 5V but only 3.3V tolerant. Since nearly all motherboards offer only 5V slots, I have to make this card somehow 5V compliant. I thought of using a QuickSwitch from IDT. Is this a good idea? What are the things I have to take care of when using this method? Are there any other ways to make my card 5V compliant?

thanks+regards,
Nicky

Article: 66362
> Thanks Lars, I've implemented the reset count as discussed with
> John earlier. This looks cleaner though so I'll try it tomorrow
> and report back results.

Hmm. I've tried using the ROC component as Lars suggested. The synthesis report doesn't give many clues as to what's going on, it says...

"Generating a Black Box for component <ROC>"

... which might be expected, but rst or GSR aren't mentioned.

The map report says...

The trimmed logic reported below is either:
 1. part of a cycle
 2. part of disabled logic
 3. a side-effect of other trimmed logic

The signal "roc_inst_1" is unused and has been removed.

Optimized Block(s):
TYPE BLOCK
GND  XST_GND
VCC  XST_VCC
BUF  roc_inst_1
BUF  roc_inst_2

...which might suggest that ROC has been replaced with something else (i.e. a connection to the GSR net), but it doesn't explicitly state this.

Can anyone confirm that this means the ROC component has been removed and the rst net has been connected to GSR?

If I can't get this confirmed I'll stick with the startup counter. Time for an experiment with a SpartanII I think.

Nial.

Article: 66363
Nial Stewart <nial@nialstewartdevelopments.co.uk> wrote:
: "Symon" <symon_brewer@hotmail.com> wrote in message
: news:c0rrgi$1b7t8e$1@ID-212844.news.uni-berlin.de...
: >
: > > So what do I need to do to get 'rst' connected to the GSR
: > > net?
: > >
: > > I've spent a fair bit of time searching the Xilinx site/docs
: > > and googling this group with no results. It seems to be one
: > > of those things that I should probably know, but just can't
: > > find anywhere.
: > >
: > > Thanks for any pointers,
: > >
: > > Nial.
: > >
: > Hi Nial,
: > Can you instantiate the STARTBUF_SPARTAN3 design element? Listed under
: > STARTBUF_architecture in the Design Elements section of the Libraries guide.
: > Cheers mate, Syms.

: Symon, that doesn't do it.

: The STARTUP_SPARTAN3 module allows you to drive the GSR net from
: an user defined source but this reset mechanism isn't visible to
: HDL so simulations won't work.

: The STARTBUF_SPARTAN3 module does the same thing, but with an
: output you can connect to your HDL reset lines which mirrors
: the GSR net. Thus simulations should match real life.

: This doesn't help me tie my top level 'rst' net to the GSR.
: I've checked through my design and _all_ my asynch reset
: declarations use this net with the correct polarity.

Have

  STARTUP_SPARTAN2 rst (.GSR(grst));

in the top level module. Use grst where you need it:

  always @ (posedge rclk or posedge grst)
    if (grst)
      rdo_cnt <= 20'h0;
    else if (!rdo_rr)
      rdo_cnt <= rdolen;
    else if ( rdo_rr)
      rdo_cnt <= rdo_cnt-1;

Drive the reset in your test fixture.

Bye
--
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de
Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

Article: 66364
"Uwe Bonnes" <bon@elektron.ikp.physik.tu-darmstadt.de> wrote in message news:c0vllm$d35$1@news.tu-darmstadt.de... > : This doesn't help me tie my top level 'rst' net to the GSR. > : I've checked through my design and _all_ my asynch reset > : declarations use this net with the correct polarity. > > have > STARTUP_SPARTAN2 rst (.GSR(grst)); > > in the top level module > > Use grst where you need it: > always @ (posedge rclk or posedge grst) > if (grst) > rdo_cnt <= 20'h0; > else if (!rdo_rr) > rdo_cnt <= rdolen; > else if ( rdo_rr) > rdo_cnt <= rdo_cnt-1; > > Drive the reset in your test fixure I can't drive it, it's not a top level port. When I described 'rst' as a top level net, it's not actually a port on the top level design. The chip I'm designing for doesn't have an external reset input pin. My understanding is that STARTUP_SPARTAN2/3 is used to allow a top level reset input to drive the GSR net, but this isn't what I need. I want the synthesis tool to drive my rst with the GSR net. NialArticle: 66365
ROC is a place holder and simulation primitive. It should appear in your edif netlist, then the Xilinx mapper removes it and connects the net to GSR. It is doing what you want. You can check the Xilinx results in FPGA Editor to convince yourself.

Nial Stewart wrote:
> > Thanks Lars, I've implemented the reset count as discussed with
> > John earlier. This looks cleaner though so I'll try it tomorrow
> > and report back results.
>
> Hmm.
>
> I've tried using the ROC component as Lars suggested. The synthesis
> report doesn't give many clues as to what's going on, it says...
> "Generating a Black Box for component <ROC>"
> ... which might be expected, but rst or GSR aren't mentioned.
>
> The map report says ...
> The trimmed logic reported below is either:
> 1. part of a cycle
> 2. part of disabled logic
> 3. a side-effect of other trimmed logic
> The signal "roc_inst_1" is unused and has been removed.
> Optimized Block(s):
> TYPE BLOCK
> GND  XST_GND
> VCC  XST_VCC
> BUF  roc_inst_1
> BUF  roc_inst_2
>
> ...which might suggest that ROC has been replaced with
> something else (i.e. a connection to the GSR net) but
> it doesn't explicitly state this.
>
> Can anyone confirm that this means the ROC component has been
> removed and the rst net has been connected to GSR?
>
> If I can't get this confirmed I'll stick with the startup
> counter. Time for an experiment with a SpartanII I think.
>
> Nial.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930  Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759

Article: 66366
> I don't know what is SLIC and data protocols between PABX. The requirement
> is usual telephone line you have at home. DAAs are used to connect to the
> lines (can be replaced with a transformer). Analogue telephone interface is
> really off-topic but high-speed capabilities of PLDs are not.

You're going to have a hell of a time with this.

PS: A DAA (Data Access Arrangement) is *not* a SLIC (Subscriber Line Interface Circuit).

Article: 66367
Hi there,

I have attached a posting from linuxppc-embedded which offers a little clarification and asks for those with similar experiences to respond.

Cheers,
Jon.

Jon Masters wrote:
> Offset in r8 is broken and I am not sure yet whether this is purely a
> hardware corruption of the kernel stack or a subtle kernel bug.

Ho hum... *whistles to self quietly*. Let me clarify stuff.

I am working on an internal project based on my own Virtex II Pro port and firmware (no redboot or ppcboot) which runs on the Insight Memec rev 3. V2PFG456 board - for those not familiar, this basically is an FPGA with an IBM 405D processor inside. Something similar to Mind.

Now the problem I am seeing, for those who are interested or might have seen similar problems with EDK6.1 generated hardware:

The port has been working fine running busybox, webservers, a userland based upon ptxdist and so on. This is based upon hardware generated using Xilinx EDK3.2 with some custom modules for ethernet and in-house hardware stuff. The system boots from SystemACE etc.

An upgrade to Xilinx EDK6.1 resulted in generated hardware which can no longer boot a stable Linux environment: the system falls over randomly in strange and non-deterministic ways, which makes debugging difficult. The Xilinx XMD/EDK software slightly compounds the debugging anyway.

Suspicion led me to disable the caching on the 405D, as it was and is still suspected that there is a problem with the hardware (currently I am using an automatically generated, really simple, cut-down hardware from EDK6.1 using its autogeneration tool, so that if/when I find this fault I can make various assertions about where it might lie). A recent posting suggesting a problem with mmap was down to me however (see below), but I am hopefully now back on track with caching properly off so I can continue to look for the original problem again.

Cheers,
Jon.

--- Red faced explanation of mmap r8 issue mentioned ---

Seems I shoved in something along the lines of:

  li r8,0      /* Load zero */
  iccci r8,r8
  dccci r8,r8

This happens during the SystemCall execution path because I am running with caching disabled and I am paranoid, so I want to force the caches to be flushed even though I have explicitly modified the iccr and dccr and page protection bits in _PAGE_BASE to not use caching for kernel or userland ID page frames (but not enough to have seen that I was trashing the register for mmap). Anyway this means that finally I might be running as per my original posting and can get back to looking for a potential memory controller problem.

Article: 66368
Sander Vesik <sander@haldjas.folklore.ee> wrote in message news:<1077074083.882919@haldjas.folklore.ee>...
> In comp.arch Martin Schoeberl <martin.schoeberl@chello.at> wrote:
> > > Is there a community that is actively involved in discussing and/or
> > > developing FPGA-based Forth chips, or more generally, stack
> > > machines?
> >
> > The Java Virtual Machine is stack based. There are some projects to
> > build a 'real' Java machine. You can find more information about a
> > solution in an FPGA (with VHDL source) at: http://www.jopdesign.com/
> >
> > It is successfully implemented in Altera ACEX 1K50, Cyclone (EP1C6) and
> > Xilinx Spartan2.
>
> It would be interesting to see results for a version that cached the
> top of the stack and used a more realistic memory interface

Like http://www.ajile.com/products.htm ?

Article: 66369
Nial Stewart <nial@nialstewartdevelopments.co.uk> wrote:
: "Uwe Bonnes" <bon@elektron.ikp.physik.tu-darmstadt.de> wrote
: in message news:c0vllm$d35$1@news.tu-darmstadt.de...
: > : This doesn't help me tie my top level 'rst' net to the GSR.
: > : I've checked through my design and _all_ my asynch reset
: > : declarations use this net with the correct polarity.
: >
: > have
: >   STARTUP_SPARTAN2 rst (.GSR(grst));
: > in the top level module
: >
: > Use grst where you need it:
: >   always @ (posedge rclk or posedge grst)
: >     if (grst)
: >       rdo_cnt <= 20'h0;
: >     else if (!rdo_rr)
: >       rdo_cnt <= rdolen;
: >     else if ( rdo_rr)
: >       rdo_cnt <= rdo_cnt-1;
: >
: > Drive the reset in your test fixture

: I can't drive it, it's not a top level port.
: When I described 'rst' as a top level net, it's not
: actually a port on the top level design. The chip
: I'm designing for doesn't have an external reset
: input pin.
: My understanding is that STARTUP_SPARTAN2/3 is
: used to allow a top level reset input to drive
: the GSR net, but this isn't what I need.
: I want the synthesis tool to drive my rst with
: the GSR net.

Any logic signal can drive the .GSR input of the STARTUP_SPARTANX. You can generate it internally or connect to an external pin. Where's the problem?

Bye
--
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de
Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

Article: 66370
> Any logic signal can drive the .GSR input of the STARTUP_SPARTANX. You can
> generate it internally or connect to an external pin. Where's the problem?

I don't want to have to drive it. GSR is driven as part of the power-up process and I want this to drive my reset net.

Previously, using Leonardo, I've been able to have a reset net declared as a signal, with some directives to tell Leonardo to connect this net to the GSR. I was hoping that XST would do the same thing, but it doesn't seem to.

As I said elsewhere in the thread, Synplify has a directive "xc_isgsr" which looks like it's doing this, but there's no equivalent XST directive listed.

Nial.

Article: 66371
praveenkn123@yahoo.com (prav) wrote in message news:<863df22b.0402172352.5886ed1c@posting.google.com>...
> hi all,
>
> I have a counter running at 50 Mhz . Now i have to sample that counter
> at 77 Mhz.
>
> My question is can i sample the counter running at 50 mhz directly
> with 77 mhz clock or should i synchronize the 50 mhz counter to 77 mhz
> clock domain and then only sample it.
>
> what are the effects if i don't the sample the 50 Mhz counter and i
> directly sample with 77 Mhz.
>
> rgds,
> prav

Hi Prav,

What do you mean by 'sample the counter'? Can you explain in more detail what you mean?

Rgds
Andre

Article: 66372
Hello,

A colleague and I are trying to implement a 256-point FFT on a Virtex-II. The problem we are having is finding a way to transfer the calculated FFT data off the FPGA so we can display it (preferably in Matlab).

If anyone knows of a tool for uploading/downloading to/from the block RAM on the Virtex-II, please let us know. If anyone has any information on how to interface to the 16Mb flash memory card on our board, it would be greatly appreciated. Finally, any tip on how to implement this project would be helpful. We are at our wits' end trying to figure out a way of displaying our FFT function and testing its accuracy.

Anything anyone can tell us is appreciated! Thanks!

John Plows & Andrew Dalrymple
Royal Military College of Canada

Article: 66373
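One possible route, sketched below with invented names, sizes and baud rate: write the FFT results into a small on-chip RAM, then let a simple UART transmitter walk through it and send one byte per location to a PC, where the bytes can be captured from the COM port and loaded into Matlab for display.

// A sketch only: all names, the baud rate and the memory size are invented.
// The FFT core would fill 'mem' through the write port; pulsing 'start' then
// streams the contents out of txd at 115200 baud, 8N1.
module bram_uart_dump #(
    parameter CLK_HZ = 50_000_000,
    parameter BAUD   = 115_200,
    parameter ADDR_W = 8                       // 256 bytes of results
)(
    input  wire              clk,
    input  wire              we,               // write port for the results
    input  wire [ADDR_W-1:0] waddr,
    input  wire [7:0]        wdata,
    input  wire              start,            // pulse to begin the dump
    output reg               txd
);
    localparam DIV = CLK_HZ / BAUD;

    reg [7:0] mem [0:(1<<ADDR_W)-1];
    reg [ADDR_W-1:0] addr = 0;
    reg [15:0] baud_cnt = 0;
    reg [3:0]  bit_idx  = 0;
    reg [9:0]  shifter  = 10'h3FF;             // {stop, data[7:0], start}, sent from bit 0
    reg        busy     = 0;

    initial txd = 1'b1;                        // serial line idles high

    always @(posedge clk)
        if (we) mem[waddr] <= wdata;

    always @(posedge clk) begin
        if (!busy) begin
            if (start) begin
                addr     <= 0;
                shifter  <= {1'b1, mem[0], 1'b0};
                bit_idx  <= 0;
                baud_cnt <= 0;
                busy     <= 1;
            end
        end else if (baud_cnt == DIV-1) begin
            baud_cnt <= 0;
            txd      <= shifter[0];            // drive the next bit for one bit period
            if (bit_idx != 9) begin
                shifter <= {1'b1, shifter[9:1]};
                bit_idx <= bit_idx + 1;
            end else begin                     // frame done: next byte or finish
                bit_idx <= 0;
                if (addr == {ADDR_W{1'b1}})
                    busy <= 0;
                else begin
                    addr    <= addr + 1;
                    shifter <= {1'b1, mem[addr + 1], 1'b0};
                end
            end
        end else
            baud_cnt <= baud_cnt + 1;
    end
endmodule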
Tobias Möglich <Tobias.Moeglich@gmx.net> wrote in message news:<40332808.11471D5C@gmx.net>...
> Hello
>
> My Question:
> Is it also possible to watch signals which are connected directly to
> IOBs in chipscope ?

Not directly...

> It seems to me that they are not available in the "ChipScope Analyser".
> Only internal signals are available (register inputs and outputs)
> Is that right? Or is it a mistake of mine. Yet I couldn't manage to
> display signals which come directly from an IO pad.

ChipScope uses internal block RAM, so it needs signals available to the chip internals. To look at signals within an IOB that are not already brought into the internals is not possible. You can, however, look at buffered copies of input signals, for example by placing an IBUF or IFD to bring the signal into the chip. If you really want to look at the pad itself, you'll need an external analyser.

> Tobias

Article: 66374
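A sketch of the IBUF/IFD suggestion (names invented, intended for the top level of a design): the pad is brought into the fabric through an explicit input buffer and a register, and the registered copy is what gets wired to the ChipScope ILA data port.

// A sketch only (names invented), intended for the top level of a design:
// bring the pad into the fabric through an explicit IBUF, register it, and
// connect the registered copy to the ChipScope ILA data port.
module pad_probe (
    input  wire clk,
    input  wire pad_in,         // the pin you want to observe
    output reg  pad_in_probe    // wire this to the ILA
);
    wire pad_in_int;

    // Xilinx input buffer primitive; an IFD (input flip-flop) would combine
    // the buffer and the register in the IOB.
    IBUF ibuf_i (.I(pad_in), .O(pad_in_int));

    always @(posedge clk)
        pad_in_probe <= pad_in_int;   // what the analyser sees, one clock late
endmodule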
"John Plows" <s22838 at rmc dot ca> wrote in message news:403378f2@news.kos.net... > Hello > > A collegue and I are trying to implement a 256 point FFT on a Virtex-II. > The problem we are having is finding a way to transfer the calculated FFT > data off the FPGA so we can display it (preferably in Matlab). > > If anyone knows of a tool for uploading/downloading to/from the block > RAM on the Virtex-II, please let us know. > If anyone has any information on how to interface to the 16Mb flash > memory card on our board, it would be greatly apreciated. > Finally, any tip on how to implement this project would be helpful. We > are at our wits end trying to figure out a way of displaying our FFT > function and testing its accuracy. > > Anythign anyone can tell us is apreciated! Thanks! > > John Plows & Andrew Dalrymple > Royal Military College of Canada > Do you have any sort of processor on the fpga? If not, perhaps you could put in a small one (picoblaze) and a uart, and pass the data out through a serial port.