?? I wonder if there is any reason why it would be useful to compile the verilog for a FPGA?

Austin

Pablo Bleyer Kocik wrote:
> For those who are interested, SUN released Open SPARC today:
>
> http://opensparc-t1.sunsource.net/download_hw.html
>
> Verilog RTL, verification and simulation tools included.
>
> Cheers.
>
> --
> PabloBleyerKocik / "Person who say it cannot be done
> pablo            /  should not interrupt person doing it."
> @bleyer.org      /  -- Chinese proverb
Article: 99326
Austin Lesea wrote:
> ??
>
> I wonder if there is any reason why it would be useful to compile the
> verilog for a FPGA?
>
> Austin
>
> Pablo Bleyer Kocik wrote:
> > For those who are interested, SUN released Open SPARC today:
> >
> > http://opensparc-t1.sunsource.net/download_hw.html
> >
> > Verilog RTL, verification and simulation tools included.
> >
> > Cheers.

I can imagine no practical use. But it sure is fun to do :).

-Isaac
Article: 99327
Errr... To start developing and testing a SoC based on OpenSPARC interfaced to a custom digital block in an FPGA? CPU cores + FPGA blocks seem to be resurrecting now. Also, a bunch of companies are working hard on FPAAs and other analog configurable architectures (this time done right).

If we have 8051/PSoC/ARM7/PowerPC embedded cores now, why can't we dream of having devices based on a state-of-the-art and truly open platform (GPL) in the next few years? And unlike the other proprietary solutions, anyone can share ownership and help in the development of OpenSPARC...

--
PabloBleyerKocik / "But what... is it good for?"
pablo            /  -- 1968 Engineer at IBM's Advanced Computing
@bleyer.org      /  Systems Division, commenting on the microchip
Article: 99328
Thanks for the pointers. I will try that. Cheers.
Article: 99329
Jim Granville wrote:
> dp wrote:
>
> > While I agree with you that it is outdated and too slow,
> > I'd say some single chip USB 2.0 <-> much-faster-than-todays-JTAG
> > would be a practical enough solution. It will take care of the level
> > conversion and everything, and the speed will be as high as it
> > gets. Need more speed for too big a board, put several JTAG chains
> > on it, ready.
> >
> > I do not understand what you mean by "the 5 MHz clock dies out after",
> > what's wrong with buffering it? But 5 MHz is too slow for todays big
> > chips anyway, so the point is valid nonetheless.
>
> <snip>
>
> JTAG itself is OK, it is the implementations that are sometimes 'left
> till last'.
>
> Speed-sag is solved with TinyLogic buffers, as dp suggests.
>
> Seems there are two paths:
> a) Start a committee as Neil suggests (no smiley seen?)
>
> b) Start an openCore project, that defines a CPLD fast JTAG interface,
> to either a Parallel port, or a FTDI device, or a Cypress USB uC etc.
> This would have a BAUD select, and have the ability to run multiple JTAG
> stubs - ie if Chain is broken (seems to be common) then run a star
> structure.
>
> -jg

For a ready-to-use USB to JTAG solution based on the FTDI FT2232, have a look at:

http://www.amontec.com/jtagkey.shtml
http://www.ftdichip.com/Products/EvaluationKits/3rdPartyKits.htm

The Amontec JTAGkey is certainly what you are looking for to build custom JTAG applications over USB quickly and at the best price.
IO voltage in a wide range: 1.4V to 5V
JTAG freq: from 1 Hz to 6 MHz

Laurent
Article: 99330
Weng Tianxiang wrote:
> Hi Pablo,
> Thank you for your useful information.
>
> Weng

The problem is "System Requirements": "SPARC CPU based system"
Article: 99331
"Raymond" <raybakk@yahoo.no> wrote in message news:1143058869.018478.68900@g10g2000cwb.googlegroups.com... >I am trying to make a webserver on my ML401 development card and I use > BSB to generate my hardware. > > When I try to "Generate libraries and BSPs" I get the error: > //////////////////////////////////// > ERROR:MDT - ERROR FROM TCL:- lwip () - child process exited abnormally > while executing > "exec bash -c "cd src;make all \"COMPILER_FLAGS=$compiler_flags\" > \"EXTRA_COMPILER_FLAGS=$extra_compiler_flags\" >& logs"" > (procedure "::sw_lwip_v2_00_a::execs_generate" line 51) > invoked from within > "::sw_lwip_v2_00_a::execs_generate 40121220" > ////////////////////////////////////// > > My version on EDK is 8.101 and ISE is 8.102 > > My Ethernet component looks like this in my MHS FILE: > ///////////////////////////////// > BEGIN opb_ethernet > PARAMETER INSTANCE = Ethernet_MAC > PARAMETER HW_VER = 1.02.a > PARAMETER C_DMA_PRESENT = 1 > PARAMETER C_IPIF_RDFIFO_DEPTH = 32768 > PARAMETER C_IPIF_WRFIFO_DEPTH = 32768 > PARAMETER C_OPB_CLK_PERIOD_PS = 10000 > PARAMETER C_BASEADDR = 0x40c00000 > PARAMETER C_HIGHADDR = 0x40c0ffff > BUS_INTERFACE SOPB = mb_opb > PORT OPB_Clk = sys_clk_s > PORT PHY_rst_n = fpga_0_Ethernet_MAC_PHY_rst_n > PORT PHY_crs = fpga_0_Ethernet_MAC_PHY_crs > PORT PHY_col = fpga_0_Ethernet_MAC_PHY_col > PORT PHY_tx_data = fpga_0_Ethernet_MAC_PHY_tx_data > PORT PHY_tx_en = fpga_0_Ethernet_MAC_PHY_tx_en > PORT PHY_tx_clk = fpga_0_Ethernet_MAC_PHY_tx_clk > PORT PHY_tx_er = fpga_0_Ethernet_MAC_PHY_tx_er > PORT PHY_rx_er = fpga_0_Ethernet_MAC_PHY_rx_er > PORT PHY_rx_clk = fpga_0_Ethernet_MAC_PHY_rx_clk > PORT PHY_dv = fpga_0_Ethernet_MAC_PHY_dv > PORT PHY_rx_data = fpga_0_Ethernet_MAC_PHY_rx_data > PORT PHY_Mii_clk = fpga_0_Ethernet_MAC_PHY_Mii_clk > PORT PHY_Mii_data = fpga_0_Ethernet_MAC_PHY_Mii_data > END > /////////////////////////////////////////////// > > Is there anyone that can have an idea what might be the problem? 
> > Raymond > It should be a path trouble. The compiler doesn't find the files. I had it in a custom version of lwip: 1.01.a But if you use lwip 2.00.a with eternet or ethernet lite core it should not appear. Check lwip configuration into: Software Platform Setting OS and Libraries expand lwip tree MarcoArticle: 99332
Hi Robin,

"Robin Bruce" <robin.bruce@gmail.com> wrote in message news:1142947226.866898.178060@e56g2000cwe.googlegroups.com...
> Hi guys,
> I was wondering if anyone here knows the technique that the Xilinx
> floating-point square-root core employs to get its results.
> I ask because we've got a unit that does the same job with near-identical
> latency and clock speed but uses far less resource (330 slices, versus
> Xilinx's 624 slices for single-precision accuracy).

I expect that it is very similar to the algorithm your own unit is using. I don't think I can go into details though. As you will see from the product datasheet, this core is based on IP that Xilinx licensed from QinetiQ (UK) Ltd. We have been working on improvements since we brought this in-house, but the focus has been on the more "mainstream" floating-point operators (e.g. add, multiply). So it's good to know that there is at least some interest in this square-root core too!

To cut a long story short: we know where these inefficiencies are, and our core should indeed be a fair bit smaller when fully pipelined than it currently is. The engineering team is aware of this so there should be a further-optimized version available in a future IP release (watch this space).

Incidentally, how come you have so many square-roots to do? We racked our brains and couldn't think of a "killer app" that would require a ~300MHz fully pipelined square-root unit. Perhaps this use case is more common than we thought?

Thanks a lot for your feedback in any case.

Cheers,

-Ben- (@xilinx)
Article: 99333
Hello all,

How do I set false-path constraints that carry through from synthesis (Synplify 8.x) to PAR (Actel Designer) in Actel Libero? I have named all my false paths xxx_falsepath, then created two SDC files with

define_false_path -to {{*_falsepath}}   # for Synplify

and

set_false_path -through {*_falsepath}   # for Designer

but neither seems to work. Ideally, I would like to set a single (or maybe two) attribute in the VHDL code and have it follow the design all the way down to PAR. Is that at all possible?

regards,
-Burns
Article: 99334
Hi John,

"johnp" <johnp3+nospam@probo.com> wrote in message news:1143059666.200384.228940@t31g2000cwb.googlegroups.com...
> Can someone explain the difference between the Xilinx shift_extract
> and shreg_extract constraints?
>
> Shreg_extract appears to prevent inferring SRL16 based shifters,
> what does shift_extract do?

The SHIFT_EXTRACT attribute/constraint controls whether XST extracts logical shift operations during synthesis. That is, if your design contains a parallel shift left/right of a bus by some dynamic distance (a "barrel shifter"), XST can try to do something clever with it (rather than just leaving it as random logic). It doesn't have anything to do with SRL16s.

Cheers,

-Ben-
Article: 99335
Hello All!

We're working with a Xilinx Virtex-II Pro board. As a part of our project, we had to write a hardware stack. After having made it work, we thought of optimizing the design and hence removed a few states, reducing the number of states from 8 to 4. The older code was getting synthesized in around 20 minutes, but the new code takes hours to synthesize, and so does the PAR. How can we reduce the synthesis time? Why does the code that previously took less time to synthesize now take so much longer?

--
DotNetters.
Article: 99336
Hi all:

I have added a page to the MyHDL Cookbook, about a stopwatch design similar to the design from the Xilinx ISE tutorial:

http://myhdl.jandecaluwe.com/doku.php/cookbook:stopwatch

The design is tackled in "the MyHDL way". The page describes the whole flow, including unit testing, automatic conversion to Verilog, and FPGA synthesis results.

MyHDL is a Python package that turns Python into a HDL.

Best regards,

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Losbergenlaan 16, B-3010 Leuven, Belgium
From Python to silicon: http://myhdl.jandecaluwe.com
Article: 99337
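At heart, a stopwatch like the one on that cookbook page is a cascade of BCD digit counters with a carry rippling between them. A plain-Python illustration of that counting logic (stdlib only, not actual MyHDL code; the function name and digit layout are invented for the example):

```python
def bcd_tick(digits, limits):
    """Advance a multi-digit BCD counter by one tick.

    digits: list of ints, least-significant digit first.
    limits: max value of each digit position.
    Returns the updated digit list.
    """
    out = list(digits)
    for i, limit in enumerate(limits):
        if out[i] < limit:
            out[i] += 1
            return out        # no carry out of this digit: done
        out[i] = 0            # roll over and carry into the next digit
    return out                # full wrap-around of the whole counter

# Three digits: tenths (0-9), seconds units (0-9), seconds tens (0-5).
state = [0, 0, 0]
for _ in range(123):          # 12.3 seconds worth of 0.1 s ticks
    state = bcd_tick(state, [9, 9, 5])
print(state)                  # -> [3, 2, 1], i.e. "12.3"
```

In an HDL (MyHDL included) the same structure becomes one small counter per digit, with the comparison against the digit limit generating the carry-enable for the next stage.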
Hi Ben,

From asking around, my guess would be that you guys are doing some kind of SRT-division-related technique. I'm about a million miles away from being an expert on this, but it was my understanding that this technique was best suited to the case where a division unit and a square root unit are sharing resources. I might be wrong on that though.

The technique one of my esteemed colleagues settled on was derived from Napier's bones. From what I can tell so far, the technique performs well in terms of clock frequency and resource use in comparison to other techniques being deployed. It has a similar latency to the core that you guys have developed, but I understand now that there are other techniques out there that can reduce latency by calculating two bits of the result per cycle. I imagine this would reduce the clock frequency possible when you're fully pipelined. Not to take anything away from what you guys have done though: when you've made such an effort to allow the cores to be customisable to the degree that they are, it's perfectly understandable that they're not fully optimised in every way :)

The interest I've got in the square root core is from the perspective of wanting to provide a design environment to HPC programmers who want to use FPGAs to get serious computational speed-up. For that I need fully pipelined floating-point cores for single and double precision. These floating-point cores would then be linked together to implement specific HPC functions that are maximally pipelined.

Given that data sets are going to have to be big to justify the use of FPGAs, and that the functions will be pipelined, the cores' main qualities should be, in order of importance:
1) High max clock frequency
2) Low resource use
3) Low latency (a distant third)

Latency isn't so much of a concern: for a pipelined HPC function with a large data set, the contribution of the core latencies to total computation time should be such a tiny fraction as to make it insignificant in a lot of cases.

Anyway, I think I've rambled on a bit. The point is to say that I don't know what the application is either just yet, but I'm taking a "build it and they will come" style approach. How much are you guys considering the emerging reconfigurable HPC crowd? I would have thought that given your experience with the floating-point cores it would be right up your street.

Cheers,

Robin (@Nallatech)
Article: 99339
Hi Robin,

> From asking around my guess would be that you guys are doing some kind
> of SRT division related technique. I'm about a million miles away from
> being an expert on this

Same here. The safest thing for me to do is to say as little as possible. :-)

We've looked at quite a few algorithms for division and square-root operations, and how they map to the FPGA fabric. Mainly our conclusion at the moment is that more exotic approaches using hard multipliers don't tend to buy you anything in area or speed terms, and can often make the latency *worse* (because of the pipelining requirements). We do endeavour to keep our eyes open, though.

> Not to take anything away from what you guys have done though, when
> you've made such an effort to allow the cores to be customisable to the
> degree that they are, then it's perfectly understandable that they're
> not fully optimised in every way :)

Yes, there's a lot of configurability in there. The variable data rate/pipeline depth is the most complex feature (allowing you to choose between fully pipelined and fully sequential operation, or various options in between). The main issue right now is that SRL16 pipes are sometimes not being used when they should be.

> The interest I've got in the square root core is from the perspective
> of wanting to provide a design environment to HPC programmers who want
> to use FPGAs to get serious computational speed-up. ... Given that data
> sets are going to have to be big to justify the use of FPGAs, and that
> the functions will be pipelined, the cores' main qualities should be,
> in order of importance:
> 1) High Max Clock Frequency
> 2) Low Resource Use
> 3) Low Latency (A distant third)

Interesting that power consumption doesn't figure at all (probably makes #17, just below "looks attractive when viewed in FPGA Editor" :-)).

> How much are you guys considering the emerging Reconfigurable HPC
> crowd? I would have thought that given your experience with the
> floating-point cores it would be right up your street.

There's certainly a lot going on in this arena: most of the "early adopters" of FP on FPGAs seem to be attacking just these sorts of problems. You'll know all about the Edinburgh Parallel Computing Centre's "FPGA supercomputer" undertaking, of course (since our guys talk to your guys about it on a regular basis.) Of course, now and then someone starts complaining that they're not seeing any revenue from these sorts of projects... but we try not to let that stop us! :)

Cheers,

-Ben-
Article: 99340
Brannon wrote:
> 1. Support for a lot of chips, say 2048 of them.

Have you considered an approach like the National SCANSTA112?

Henk
www.mediatronix.com
Article: 99341
Jim Granville schrieb:
> JTAG itself is OK, it is the implementations that are sometimes 'left
> till last'.

First, I would like to separate two issues, which I believe have no interdependencies with each other:
- the protocol
- the electrical implementation

_protocol_
I believe the protocol is quite ok. The TAP state machine, the commands, and the way pins or internal bits are controlled are all reasonable. So software-wise, I believe JTAG is here to stay. One of the interesting observations here is that most APIs controlling JTAG devices are *not* based on the pure essence of the protocol, i.e. the SVF commands. Instead most APIs use a bit-bang paradigm, which makes high-level optimizations impossible.

_electrical implementation_
The electrical implementation might be outdated and not reflect the current state of the art. Even if this is the case (I am not sure about it), I still doubt that everybody out there really wants to implement more advanced protocols, based on whatever standard (I read Ethernet in previous posts). In my impression, the most critical question is not JTAG itself, but the clumsy interface between JTAG and typical PCs. Parallel cables are really not the way to do things any more. USB requires a proper protocol, as read-backs have to be avoided as much as possible (which is sometimes impossible due to poor software interfaces). This all looks very similar to what the music industry did with the ancient MIDI protocol: while the five-pin cable is almost gone now, the protocol itself is still alive in PCI, USB and FireWire implementations.

_suggestions_
I have been dreaming for a few years now that Xilinx would open and document Impact's JTAG API. This would allow everybody (especially 3rd party eval board manufacturers) to plug in their own electrical implementations (e.g. USB-based), while sticking to the JTAG software paradigm. Going from there, maybe some open source project might start, implementing a standardized and powerful JTAG interface (plus Impact drivers), based on USB, Ethernet or whatever feels appropriate.

Any comments highly welcome, best regards
Felix
--
Dipl.-Ing. Felix Bertram
http://homepage.mac.com/f.bertram
Article: 99342
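The claim above that the TAP state machine is "reasonable" is easy to make concrete: the entire IEEE 1149.1 controller is 16 states driven by a single input (TMS, sampled on the rising edge of TCK). A stdlib-only Python sketch of the transition table (illustrative; state names follow the standard, the helper function is invented for the example):

```python
# IEEE 1149.1 TAP controller transitions:
# state -> (next state on TMS=0, next state on TMS=1)
TAP = {
    "Test-Logic-Reset": ("Run-Test/Idle", "Test-Logic-Reset"),
    "Run-Test/Idle":    ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-DR-Scan":   ("Capture-DR",    "Select-IR-Scan"),
    "Capture-DR":       ("Shift-DR",      "Exit1-DR"),
    "Shift-DR":         ("Shift-DR",      "Exit1-DR"),
    "Exit1-DR":         ("Pause-DR",      "Update-DR"),
    "Pause-DR":         ("Pause-DR",      "Exit2-DR"),
    "Exit2-DR":         ("Shift-DR",      "Update-DR"),
    "Update-DR":        ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-IR-Scan":   ("Capture-IR",    "Test-Logic-Reset"),
    "Capture-IR":       ("Shift-IR",      "Exit1-IR"),
    "Shift-IR":         ("Shift-IR",      "Exit1-IR"),
    "Exit1-IR":         ("Pause-IR",      "Update-IR"),
    "Pause-IR":         ("Pause-IR",      "Exit2-IR"),
    "Exit2-IR":         ("Shift-IR",      "Update-IR"),
    "Update-IR":        ("Run-Test/Idle", "Select-DR-Scan"),
}

def step(state, tms_bits):
    """Clock a sequence of TMS bits through the TAP controller."""
    for tms in tms_bits:
        state = TAP[state][tms]
    return state

# The well-known safety property: five TCK clocks with TMS held high
# reach Test-Logic-Reset from any state.
assert all(step(s, [1] * 5) == "Test-Logic-Reset" for s in TAP)
```

SVF players and bit-bang drivers alike are, underneath, just walking this table; the difference the post describes is whether the API exposes the high-level scan operations or only the raw TMS/TDI wiggling.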
Cheers Ben,

Well, it's true that power consumption isn't really keeping me up at night. I suppose it's because I'm envisaging this stuff in a workstation-style environment. No matter how much power the FPGA is consuming, I know it's still going to look good compared to the 100W+ that commodity microprocessors consume.

Well, this has certainly been a model discussion for comp.arch.fpga! No flaming, no rattles being thrown out of prams and no interminable arguments about standards of English ;-)

Cheers,

Robin
Article: 99343
Hi Marco. And thanks for answering :)

I can't find any path options here (it may be that I'm overlooking something). What should I look for in the configuration settings? Is it possible to download a new version of lwIP (like a custom lwIP) and use that instead?

By the way, I use lwip 2.00.a.

Raymond
Article: 99344
"bachimanchi@gmail.com" <bachimanchi@gmail.com> writes: > /opt3/synopsys/v-2003.12-sp1/pce/sparcoS5/syn/bin//design_analyzer: > line 10: 32 > 50 Segmentation Fault (core dumped) ${exec_name} -r > ${synopsys_root} "$@" > > does anyone have any idea over these kind of errors Try to contact Synopsys. If you are on a maintenance contract you should be able to get an upgrade. Petter -- A: Because it messes up the order in which people normally read text. Q: Why is top-posting such a bad thing? A: Top-posting. Q: What is the most annoying thing on usenet and in e-mail?Article: 99345
Symon schrieb:
> > 1. Support for a lot of chips, say 2048 of them. JTAG supposedly
> > supports 16 chips. Yeah, right. The 5MHz clock signal dies out after
> > three or four. The 200KHz signal dies after eight or nine. This will
> > require some strong signals with error correction, but, heck, if a
> > basic ethernet layer can do it....
>
> Hi Brannon,
> I agree with much of what you write.
> As a workaround for your clocking problems, you could try source terminating
> the clock driver from your JTAG controller. On my platform cable USB I use a

HMMMM??? SOURCE termination for a multidrop CLOCK signal? It may work, but more due to Murphy's law than by design... AC termination at the end of a clock line is most probably the better way to go. And don't forget, those parallel ports often enough spit out really ugly signals. So a buffer for the clock line with a Schmitt-trigger input and an RC filter in front pays off.

Regards
Falk
Article: 99346
My design using an asynchronous FIFO simulates successfully in System Generator, but when I put the design's ngc file into a black box and use ModelSim to co-simulate it, the outputs are all 'U'. Why?
Article: 99347
I've used the following path to get an .eps file:

[print rtl] -> .ps -> [ps2ai] -> .ai -> [import in Mayura Draw] -> [edit and export] -> .eps

Information on ps2ai is available at http://www.mayura.com/ps2ai.htm

Another way:

.ps -> [ps2pdf] -> .pdf -> [Adobe Reader print view] -> .ps/.prn -> [pstoeps] -> .eps

The Adobe PDF reader has a print mode that you can use to print only what you are viewing on the screen. You can google for tools like pstoeps and ps2pdf.
Article: 99348
Hi Jon,

Generally speaking, the LatticeSC compares well with the S2GX and (when available) the V4FX. The MACO blocks on the SCM parts, though, are an advantage over Xilinx and Altera: they provide hard-coded functions like a 1GE/10GE MAC, a DDR1&2/QDR memory controller, and a full SPI4.2 interface. All these functions need to be implemented in the FPGA fabric in V4FX and S2GX (consuming a lot of LUTs). The number of I/Os is comparable, but the I/O speed is well above. So you could conclude that the LatticeSC is superior to the V4FX and S2GX. Of course this is my personal opinion, and I can imagine that people in this forum will have other opinions.

Regards,
Luc

On Wed, 22 Mar 2006 08:22:43 -0600, "maxascent" <maxascent@yahoo.co.uk> wrote:
> Hi
>
> I was hoping to get some opinions on Lattice FPGAs compared to Xilinx and
> Altera. I see they have a SC device out. How does this compare to similar
> devices from the other two?
>
> Cheers
>
> Jon
Article: 99349
Can you point me to such a SERDES? Ideally its output would be 64 bits. Also, it shouldn't do the FEC; FEC is planned to be done in the FPGA.

Thanks in advance
Mehdi

Allan Herriman wrote:
> On 21 Mar 2006 09:08:21 -0800, "Alain" <no_spa2005@yahoo.fr> wrote:
>
> > Unfortunately, even OC-192 is excluded from Virtex-4 (ug076.pdf:
> > "Payload compatible only"), so no hope for OTU-2 I think.
> > We have to wait for the Virtex-5 family?
>
> No. That is unlikely to have sufficient jitter performance, due to
> certain compromises that must be made when putting an MGT on an FPGA.
> In particular, it's likely to use a ring oscillator rather than an LC
> oscillator which would have better performance.
>
> Use an external SERDES designed for G.707 / G.709 work.
>
> Note that (before they discontinued it) Xilinx's standalone SERDES
> didn't meet the SONET jitter requirements either, so getting these
> things to work is clearly not a trivial task.
>
> Regards,
> Allan