Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
"Michael Wilspang" <michael@wilspang.dk> wrote in message news:3e280bd9$0$71705$edfadb0f@dread11.news.tele.dk... > Is it possible to boot a Spartan-IIE in "Slave Serial Mode" directly > from a microcontroller with an SPI interface (master mode)? Yes, just connect PROGRAM and DONE to free IOs, and CCLK and DIN to the SPI IOs, and you have all you need. -- Regards, FalkArticle: 51626
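Falk's recipe can be sketched in software. Below is a toy Python model of the bit ordering in slave-serial configuration (one bit per CCLK rising edge, each byte shifted MSB first); the `Fpga` class and function names are invented stand-ins, not a real device model, and a real driver would also pulse PROG_B low and watch INIT_B and DONE as the data sheet describes.

```python
# Sketch of the slave-serial configuration sequence described above,
# modeled in Python so the bit ordering can be checked without hardware.
# The Fpga class below is a toy stand-in, not a real device model.

class Fpga:
    """Minimal model: captures DIN on each rising CCLK edge."""
    def __init__(self, expected_bits):
        self.expected = expected_bits
        self.received = []

    def clock_bit(self, din):
        self.received.append(din)

    @property
    def done(self):
        # Real hardware raises DONE after a valid full bitstream.
        return self.received == self.expected

def bits_of(data):
    """Bitstream bytes as a flat bit list, MSB of each byte first."""
    return [(b >> i) & 1 for b in data for i in range(7, -1, -1)]

def configure(fpga, bitstream):
    """Shift a bitstream (bytes) into the FPGA, MSB first per byte,
    as slave-serial mode requires, then poll DONE."""
    # A real driver would first pulse PROG_B low and wait for INIT_B high.
    for byte in bitstream:
        for bit in range(7, -1, -1):
            fpga.clock_bit((byte >> bit) & 1)
    return fpga.done
```

With an SPI master, `configure` corresponds to writing the bitstream bytes to the SPI data register with the clock idle low and data sampled on the rising edge.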
"Richard Coster" <richard.coster@4rf.com> wrote in message news:732c59cc.0301161344.737576f5@posting.google.com... > The DLL is fed a 49MHz clock from a Cypress CY22393 programmable > clock, which itself is sourced from a crystal oscillator. CLK0 is fed > back via a BUFG to the CLKFB pin. The DLL is configured to divide by What is the jitter specification of the clock coming from the Cypress? The Spartan-II demo board from Insight comes with a programmable low-cost oscillator (some kind of RC type), and it has a HELL of a lot of jitter. The DLL can't lock reliably to a jittery signal. Is the clock present AND stable before the FPGA is configured? -- Regards, FalkArticle: 51627
john jakson wrote: [a great description of working with FFTs in hw, thanks] > If you try to > use HandelC to do real hw design at the level of detail hw guys would > get into, I'd say you're beating a dead horse. I'll take your word for it, as that's kind of what I expected. > Seeing your web site, if you are familiar with Occam & its view of > Processes, then you will know exactly where HandelC is coming from, > since it is just the C version of Occam and Occam is basically a HDL > simulator. Occam isn't used in supercomputing. It's all C and Fortran. The reason is that the people who write the codes do science first and software second. > If you have a more interesting app than FFT, let us know. I've been staying away from the app because it's fuzzy, but this is it: Someone writes a Fortran/C program for use on a single processor. They then add directives on how to parallelize it. It's important to realize that the person who wrote this program barely wants to mess with parallelism, not to mention hardware design. Now, the whole game with parallel computing is to keep the ALUs busy. The ALUs are plenty fast, but getting data to and from them is difficult. It's worse when you tie 1000 processors together. Mostly, if people get 15% of peak speed they're happy. What's worse is the contortions that people have to go through to make use of the hardware. It's said that software lags hardware, but I believe that's because the hardware is such a pain to use efficiently. It would be much easier for people to use parallel computers if they weren't so difficult to squeeze work out of. What I'm wondering is whether, rather than use a one-size-fits-all communication interconnect, it wouldn't be better to use an FPGA fabric (with lots of CPUs embedded) that could be specialized for each program. 
So if a chunk of code includes something along the lines of

do j=1,n
  do i=1,n
    a(i,j) = k1*b(i+1,j) + k2*b(i-1,j)
  enddo
enddo

and this runs across 1000 processors, then the FPGA would contain circuitry to handle communication and synchronization for elements not on the local processor. There are lots of ways to handle this depending on the context of the loop nest and a few other things. It would be best if the values of B needed on different processors were sent as soon as they were generated. So the CPU might write the value and address to a port in the FPGA, which would in turn send a message to the FPGA on the correct node, which would hold it in some sort of cache-like memory until the local processor asked for it. If it could figure out how to write it to main memory, that would be better. But it could be that this loop nest is rewritten to first send a fetch request for all the neighbors' b values needed on this processor, execute the internal loops, wait on the fetch request, and then do the edge elements. Different circuitry is needed to handle global reductions (sum, max, etc.), fetch requests where the processor isn't known until run time, support for indirect addressing (a[i] = k*b[x[i]]), and lots of other scenarios. There are conferences on reconfigurable computing, so I assume people are working on this sort of thing. There's even a machine, the SRC-6, but I haven't heard from them in a while, so I assume it's not so simple. But I'd like to know what the problems are. The hardware sounds reasonable: lots of FPGAs around a CPU and memory, with some in between. The compiler that takes Fortran and decides what communication should be put in hardware doesn't sound too bad. That communication has to be translated to Verilog. If the scope of the problem is limited to reading and writing memory, sending and receiving packets, adding internal cache-like objects, and synchronization, would this be difficult? Finally comes the rest of the compilation to FPGA. 
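The halo-exchange idea behind that loop nest can be sketched with a serial toy model (Python here, purely for illustration): each "processor" owns a block of b, ships its edge values to its neighbors (the messages the proposed FPGA fabric would carry), and then computes its block. The 1-D decomposition and all function names are simplifications invented for this sketch, not from any real API.

```python
# Toy model of the halo exchange described above for the stencil
#   a(i) = k1*b(i+1) + k2*b(i-1)
# distributed over several "processors" (plain Python lists here).

K1, K2 = 2.0, 3.0

def split(b, nproc):
    """Block-distribute b over nproc owners (assumes even division)."""
    n = len(b) // nproc
    return [b[p * n:(p + 1) * n] for p in range(nproc)]

def exchange(blocks):
    """Each owner receives the edge values of b from its neighbours --
    the messages the FPGA fabric would carry as soon as b is produced."""
    halos = []
    for p in range(len(blocks)):
        left = blocks[p - 1][-1] if p > 0 else None
        right = blocks[p + 1][0] if p < len(blocks) - 1 else None
        halos.append((left, right))
    return halos

def compute(blocks, halos):
    """Apply the stencil; global boundary points (no halo) stay None."""
    out = []
    for blk, (left, right) in zip(blocks, halos):
        for i in range(len(blk)):
            bm = blk[i - 1] if i > 0 else left               # b(i-1)
            bp = blk[i + 1] if i < len(blk) - 1 else right   # b(i+1)
            out.append(None if bm is None or bp is None
                       else K1 * bp + K2 * bm)
    return out
```

The distributed result matches a serial sweep of the same stencil, which is exactly the property the hardware-assisted communication would have to preserve.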
Can this all be easily handled by current tools? It sounds like people still get involved with floor planning (I don't know about routing). Hope this helps. MattArticle: 51628
Assuming you are using ISE 4 or 5: If you want to simply make a new project using all the same source files, just do a "File -> Save Project As..." and you'll create a new project with a new name...using all the same files as your original project. The only thing different will be the project name. If you want to clone the whole project, files and all, simply use Explorer to make a copy of the entire project directory, using a new name. Go into the new directory and change the filename portion of the project ".npl" filename to reflect your new directory's name. The first time you "Implement Design" you'll get a blizzard of filename warnings...which you can completely ignore. "flora" <floreq10@hotmail.com> wrote in message news:552bda4b.0301170324.4b3d26c7@posting.google.com... > Hello, > how can we rename an implementation project within the Xilinx tool. > I want to create a clone copy of a project by giving it another name > ThanksArticle: 51629
Nope. Xilinx's "Answer Record 9632" specifically says neither SPI nor I2C will work for serial configuration on Xilinx devices. I tried it myself...it doesn't work...at least on Virtex and Spartan devices. "Michael Wilspang" <michael@wilspang.dk> wrote in message news:3e280bd9$0$71705$edfadb0f@dread11.news.tele.dk... > Is it possible to boot a Spartan-IIE in "Slave Serial Mode" directly > from a microcontroller with an SPI interface (master mode)? > > > /Michael Wilspang > >Article: 51630
Kevin Brace wrote: > Steve, > > Since Virtex-E up to 300K system gates is supported by ISE WebPACK, why > doesn't Xilinx support Virtex up to 300K system gates in ISE WebPACK? The main reason is size. The WebPACK is currently about 180 MB. If we included all Spartan and Virtex families, it would grow to 300 MB. So, we have decided to support 2 Spartan families and 2 Virtex families (plus 1 device from a newer family that we want to promote). We are trying to keep the WebPACK simple by keeping the size reasonable and the installation easy. Steve > > > Kevin Brace (If someone wants to respond to what I wrote, I prefer if > you will do so within the newsgroup.) > > Steve Lass wrote: > > > > David, > > > > Virtex was never supported in WebPACK. The first Virtex family supported > > in the WebPACK was VirtexE. You will need to purchase ISE BaseX to do > > a XCV50. > > > > Steve > >Article: 51631
A hacked license??? That's the usual cause. ModelSim detects a hacked license in many different ways, but the Wave display code seems to be the most thorough in checking. If you're crashing back to the desktop from the Wave window, you almost certainly have a bad license file. Otherwise, there was a ModelSim version, I think it was 4.1-something, that did something like this under WinXP when WinXP first came out. Upgrading ModelSim is the only cure. <scepan@serbiancafe.com> wrote in message news:17d2d0ec.0301170614.441d72be@posting.google.com... > My friend and I have bought laptops and tried to run ModelSim on > them. Everything works fine till we try to see some signals in the > waveform. The waveform itself opens normally, but when some signals are > added to it, ModelSim crashes. The same problem occurs > on both of our computers. Does anybody know the reason?Article: 51632
Prashant, What everyone is trying to tell you: Yes, everything you want to do is possible, BUT... You are going to need to treat the devices like they all have different clocks. The fact that they all run at 40MHz is irrelevant. The internal differences cannot be ignored. Do a web search on transferring data between different clock domains. (Xilinx has a couple of app notes on their web site. IIRC, Peter wrote them, and they're good.) It's a common problem that everyone has to face. SH prashantj@usa.net (Prashant) wrote in message news:<ea62e09.0301170743.1b3a638e@posting.google.com>... > > You get rid of long-term differences, yes. But you still have to employ > > interface protocols to transfer data from one FPGA to the other. > > > > Marc > > I think it is these interface protocols that I'm trying to get an idea > about (and I still don't have a lot). I also realized that my FPGA > board does not have a clock out (unlike my mention of it in the 1st > posting), which means that while the board can accept external clocks, > I don't see a way to send a clock out from the board. Any ideas how > that can be achieved? > > Thanks, > PrashantArticle: 51633
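For what it's worth, the standard building block those clock-domain app notes describe is the two-flop synchronizer. Here's a behavioral toy model (Python, invented names) showing its two-cycle latency; a software model obviously says nothing about real metastability, which is the whole reason the second flop exists.

```python
# Behavioral model of the classic two-flop synchronizer used when a
# signal crosses into a new clock domain. Software illustration only --
# not a substitute for the real timing analysis the app notes cover.

class TwoFlopSync:
    def __init__(self):
        self.ff1 = 0   # first flop: may go metastable in real hardware
        self.ff2 = 0   # second flop: gives ff1 a full cycle to settle

    def clock(self, async_in):
        """One rising edge of the destination clock; returns the output
        as seen during this cycle (the value ff2 held before the edge)."""
        out = self.ff2
        self.ff2 = self.ff1
        self.ff1 = async_in
        return out

def synchronize(stream):
    """Run a bit stream through the synchronizer: the output is the
    input delayed by two destination-clock cycles."""
    s = TwoFlopSync()
    return [s.clock(bit) for bit in stream]
```

Note this only works for single-bit signals (or Gray-coded buses); multi-bit data needs handshaking or an async FIFO, which is the "interface protocol" part of the question.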
Is there a company out there that makes anything close to what LeCroy Research Systems used to make? Anyone know of people who used to design the boards at LRS? Mu Young Lee Thousand Oaks, CAArticle: 51634
The FPGA gains its advantage by creating a circuit optimized to the particular task, not by replicating CPUs. For example, in DNA pattern matching, hardware is built specifically to search patterns in a parallel hardware circuit rather than doing a linear search through the space one pattern at a time. The hardware for these optimized tasks is almost never the structure that would be inferred by unrolling software code. The costs and goals are entirely different; therefore an efficient solution in one domain is quite different from an efficient solution in the other.

One common stumbling block is floating point. General-purpose processors with floating point need to be able to cover an extremely wide dynamic range, and usually have fairly large mantissas to maintain precision in accumulations. Most single applications only use a small portion of that dynamic range, and many times a fixed-point realization is as good or even better and costs less to implement. For applications that require the dynamic range offered by floating point, there are things that can be done to reduce the hardware, from reduced word sizes to renormalizing less frequently. In the case of the FFT, one can separate out a common exponent before the FFT kernel, operate the kernel in fixed point on just the denormalized mantissa, and then renormalize afterwards to get back into floating-point format and marry the passed-around common exponent back with the data. This results in considerably less hardware than a full floating-point implementation, and you don't give up anything in accuracy.

Regarding the FFT, Cooley-Tukey is more or less a software algorithm. There are other factorizations that significantly reduce the number of multiplies. For example, a radix-16 Winograd FFT only requires 14 multiplies using only 6 unique (including unity) multiplicands. The trade is a significantly more complex data reordering, but for 16 points that is not onerous. 
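The common-exponent trick Ray describes is essentially block floating point. A toy numerical model of the denormalize/operate/renormalize flow (Python for illustration; the 16-bit mantissa width is an arbitrary assumption, not a recommendation):

```python
import math

# Toy model of the block-floating-point scheme described above: pull out
# a common exponent, run the kernel in fixed point on scaled mantissas,
# and reapply the exponent afterwards.

MANT_BITS = 15  # fixed-point mantissa width (plus sign); arbitrary choice

def to_block_fixed(xs):
    """Return (ints, exp) so that each x ~= int * 2**exp, with the
    largest magnitude scaled to use the full mantissa width."""
    _, e = math.frexp(max(abs(x) for x in xs))  # common exponent of the block
    exp = e - MANT_BITS
    return [int(round(x * 2.0 ** -exp)) for x in xs], exp

def from_block_fixed(ints, exp):
    """Renormalize fixed-point results back to floating point."""
    return [i * 2.0 ** exp for i in ints]

def fixed_point_sum(xs):
    """'Kernel' (here just a sum) run entirely in integer arithmetic,
    as the FPGA datapath would, then renormalized."""
    ints, exp = to_block_fixed(xs)
    return sum(ints) * 2.0 ** exp
```

Only one exponent travels alongside the whole block, so the kernel datapath needs no per-sample alignment or normalization hardware, which is where the savings come from.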
Our 8- and 16-point FFT kernel cores, which are much smaller and faster than the Xilinx cores, are based on the Winograd factorization. Matt wrote: > john jakson wrote: > [a great description of working with ffts in hw, thanks] > > > If you try to > > use HandelC to do real hw design at the level of detail hw guys would > > get into, I'd say you beating a dead horse. > > I'll take your word for it, as that's kind of what I expected > > > Seeing your web site, if you are familiar with Occam & its view of > > Processes, then you will know exactly where HandelC is coming from, > > since its is just the C version of Occam and Occam is basically a HDL > > simulator. > > Occam isn't used in supercomputing. It's all C and Fortran. The reason > is the people that write the codes do science first and software second. > > > If you have a more interesting app than FFT, lets us know. > > I've been staying away from the app because it's fuzzy but this is it: > Someone writes a fortran/C program for use on a single processor. They > then add directives on how to parallelize it. It's important to realize > that the person that wrote this program barely wants to mess with > parallelism, not to mention hardware design. Now, the whole game with > parallel computing is to keep the alus busy. The alus are plenty fast > but getting data to and from them is difficult. It's worse when you tie > 1000 processors together. Mostly, if people get 15% of peak speed > they're happy. What's worse is the contortions that people have to go > through to make use of the hardware. It's said that software lags > hardware but I believe that's because the hardware is such a pain to use > efficiently. It would be much easier for people to use parallel > computers if they weren't so difficult to squeeze work out of. 
> > What I'm wondering is rather than use a one size fits all communication > interconnect it wouldn't be better to use an fpga fabric (with lots of > cpus embedded) that could be specialized for each program. So if a chunk > of code includes something along the lines of > > do j=1,n > do i=1,n > a(i,j) = k1*b(i+1,j) + k2*b(i-1,j) > enddo > enddo > > and this runs across 1000 processors then the fpga would contain > circuitry to handle communication and synchronization for elements not > on the local processor. There are lots of ways to handle this depending > on the context of the loop nest and a few other things. It would be best > if the values of B needed on different processors were sent as soon as > they were generated. So the cpu might write the value and address to a > port in the fpga, which would in turn send a message to the fpga on the > correct node, which would would hold it in some sort of cache like > memory until the local processor asked for it. If it could figure out > how to write it to main memory that would be better. But it could be > that this loop nest is rewritten to first send a fetch request of all > the neighbor's b values needed on this processor, execute the internal > loops, wait on the fetch request, and then do the edge elements. > > Different circuitry is needed to handle global reductions (sum, max, > etc), fetch requests where the processor isn't known until run-time, > support for indirect addressing (a[i] = k*b[x[i]]), and lots of other > scenarios. > > There are conferences on reconfigurable computing so I assume people are > working on this sort of thing. There's even a machine, the SRC-6, but I > haven't heard from them in awhile so I assume it's not so simple. But > I'd like to know what the problems are. > > The hardware sounds reasonable: Lots of fpgas around a cpu and memory > with some inbetween. The compiler that takes fortran and decides what > communication should be put in hardware doesn't sound too bad. 
That > communication has to be translated to verilog. If the scope of the > problem is limited to reading and writing memory, sending and receiving > packets, adding internal cache like objects, and synchronization, would > this be difficult? > > Finally comes the rest of the compilation to fpga. Can this all be > easily handled by current tools? It sounds like people still get > involved with floor planning (I don't know about routing). > > Hope this helps. > > Matt -- --Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email ray@andraka.com http://www.andraka.com "They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759Article: 51635
Are you sure? I may be wrong about this (my PC with the Webpack installation is not available to me right now) but I think I remember seeing directories of Virtex (and possibly some other non-supported families) configuration data included in the installation. David "Steve Lass" <lass@xilinx.com> wrote in message news:3E2848B6.D3BDFADE@xilinx.com... > Kevin Brace wrote: > > > Steve, > > > > Since Virtex-E up to 300K system gates is supported by ISE WebPACK, why > > not Xilinx support Virtex up to 300K system gates in ISE WebPACK? > > The main reason is size. The WebPACK is currently about 180Mb. If we included > all Spartan and Virtex families, it would get to 300Mb. So, we have decided to > support 2 Spartan families and 2 Virtex families (plus 1 device from a newer > family that we want to promote). We are trying to keep the WebPACK simple by > keeping the size reasonable and the installation easy. > > Steve > > > > > > > Kevin Brace (If someone wants to respond to what I wrote, I prefer if > > you will do so within the newsgroup.) > > > > Steve Lass wrote: > > > > > > David, > > > > > > Virtex was never supported in WebPACK. The first Virtex family supported > > > in the WebPACK was VirtexE. You will need to purchase ISE BaseX to do > > > a XCV50. > > > > > > Steve > > > >Article: 51636
I need to evaluate the Synopsys Design Compiler synthesis tool to port an FPGA design to ASIC. I know that Synopsys SolvNet can only be accessed by identified members registered directly with Synopsys; it's not valid for a secondary user, like a user of the Xilinx or Altera design flow that comes with Synopsys FPGA Compiler. Is there a way to access SolvNet to download Design Compiler? Thanks.Article: 51638
Austin Lesea <austin.lesea@xilinx.com> wrote in message news:<3E258425.7C2ED314@xilinx.com>... > > Cisa wrote: > > > Now I have a clock whose frequency is 30.72 MHz, and I want to use the DCM > > > to generate another clock whose frequency is 1.28 MHz. > > > How can I get it? I failed in reality. Please give me some advice. > > Cisa, > > The DCM outputs (all except for the CLKDV and CLK2X) have a minimum > output frequency of 24 MHz. > > CLK2X is 48 MHz min, and CLKDV is 24/16 MHz (1.5) min. > > I suggest to use a simple synchronous counter to simply divide by 24 > (synchronously). At these low frequencies, you do not need the 100 ps > alignment offered by the DCM. Or perhaps use the DCM to divide by 12, then use a single flop to do the final divide by two... MarcArticle: 51639
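Both suggestions come to the same thing: a synchronous counter that terminal-counts at 12 and toggles an output flop gives divide-by-24 with a 50% duty cycle, and 30.72 MHz / 24 = 1.28 MHz exactly. A behavioral model, purely illustrative (names invented):

```python
# Behavioral model of the divide-by-24 discussed above: a synchronous
# counter counts 12 input cycles and toggles an output flop, i.e.
# divide-by-12 followed by divide-by-2 for a 50% duty cycle.

def divide_by_24(n_input_cycles):
    """Return the output waveform, one sample per input clock edge."""
    count, out, wave = 0, 0, []
    for _ in range(n_input_cycles):
        if count == 11:          # terminal count of the divide-by-12
            count = 0
            out ^= 1             # toggle flop: the final divide-by-2
        else:
            count += 1
        wave.append(out)
    return wave

def output_period(wave):
    """Input cycles per full output period (should be 24)."""
    rises = [i for i in range(1, len(wave))
             if wave[i - 1] == 0 and wave[i] == 1]
    return rises[1] - rises[0]
```

In an FPGA this is a handful of flip-flops and LUTs; as Austin notes, at 1.28 MHz the DCM's phase alignment buys nothing.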
Hi Falk and Patrick. Falk, have you tried this interface solution? But Patrick's experience says something else!!! I'm confused! -- /Michael WilspangArticle: 51641
hi, I am a newbie to the field of FPGAs and I am very much interested in this field. I am working with a Virtex-II FPGA board (Xilinx). I am a graduate student and have been going through a lot of material. I am interested in the practical aspect of FPGA implementation, I mean the application side. I am not sure which part of the FPGA I should concentrate on to make use of the FPGA board (XC2V1000). I have been going through the following topics. Can you please let me know other important aspects I have to cover to become a good FPGA designer? 1) VHDL coding (we have to use VHDL) 2) schematic 3) architecture of the FPGA board 4) functional part of the FPGA board regards naveenArticle: 51642
Langmann wrote: > Are you sure? I may be wrong about this (my PC with the Webpack installation > is not available to me right now) but I think I remember seeing directories > of Virtex (and possibly some other non-supported families) configuration > data included in the installation. We include these directories for a variety of reasons, like: - VirtexE has a lot of common elements with Virtex, so the VirtexE tools get data from the Virtex directories. - The programming daisy chain may contain other devices, so we include programming info for all devices. Steve > > David > > "Steve Lass" <lass@xilinx.com> wrote in message > news:3E2848B6.D3BDFADE@xilinx.com... > > Kevin Brace wrote: > > > > > Steve, > > > > > > Since Virtex-E up to 300K system gates is supported by ISE WebPACK, why > > > not Xilinx support Virtex up to 300K system gates in ISE WebPACK? > > > > The main reason is size. The WebPACK is currently about 180Mb. If we > included > > all Spartan and Virtex families, it would get to 300Mb. So, we have > decided to > > support 2 Spartan families and 2 Virtex families (plus 1 device from a > newer > > family that we want to promote). We are trying to keep the WebPACK simple > by > > keeping the size reasonable and the installation easy. > > > > Steve > > > > > > > > > > > Kevin Brace (If someone wants to respond to what I wrote, I prefer if > > > you will do so within the newsgroup.) > > > > > > Steve Lass wrote: > > > > > > > > David, > > > > > > > > Virtex was never supported in WebPACK. The first Virtex family > supported > > > > in the WebPACK was VirtexE. You will need to purchase ISE BaseX to > do > > > > a XCV50. > > > > > > > > Steve > > > > > >Article: 51643
Marc, Gosh, too easy! Austin Marc Randolph wrote: > Austin Lesea <austin.lesea@xilinx.com> wrote in message news:<3E258425.7C2ED314@xilinx.com>... > > > > Cisa wrote: > > > > > Now I have a clk whose frequency is 30.72,and I want to use DCM > > > to generate another clk whose frequency is 1.28MHz. > > > How can I get it?I failed in reality.Pls give me some advance. > > > > Cisa, > > > > The DCM outputs (all execpt for the CLKDV and CLK2X) have a minimum > > output frequency of 24 MHz. > > > > CLK2X is 48 MHx min, and CLKDV is 24/16 MHz (1.5) min. > > > > I suggest to use a simple synchronous counter to simply divide by 24 > > (synchronously). At these low frequencies, you do not need the 100 ps > > alignment offered by the DCM. > > Or perhaps use the DCM to divide by 12, then use a single flop to do > the final divide by two... > > MarcArticle: 51644
I wrote: > Have you seen such a manual for a C compiler? "Austin Franklin" <austin@da98rkroom.com> wrote: > Er, yes. I wrote: > Where? "Austin Franklin" <austin@da98rkroom.com> wrote: > On my book shelf. Well, don't keep us in suspense. Tell us what manual it is. I'd like to see for myself this manual that allows one to predict what code will be generated from an arbitrary C function.Article: 51645
I have recently been driven to switch to Synplify synthesis, from XST, for a fairly complex PCI core VHDL design targeting a Virtex 300, in order to work around various ISE tool bugs (which is another story altogether). After a fairly painful conversion of constraints (for both timespecs and previous floorplanning) and getting the design to actually compile again, I have noted that the logic paths now failing to meet my timespecs are in logic that previously was not a problem with XST synthesis, although there were problem paths then also. At first glance, my "gut" says that there are more logic levels (LUTs) being used in the problematic paths, but I don't have direct proof of that, since I never actually looked at these specific paths with the XST runs. I am using the identical "max" effort settings for PAR as before the synthesis tool switch. I was wondering if any of the experts here have had any similar or different experiences regarding the "performance" of these two tools on the same source design. Or perhaps someone could point me to links to any "objective" comparison of these tools and/or information regarding what types of logic each is better/worse at optimizing? -- Roger Green B F Systems - Electronic Design Consultants www.bfsystems.comArticle: 51646
Ray Andraka wrote: > > The FPGA gains its advantage by creating a circuit optimized to the > particular task, not by replicating CPUs. For example, in DNA pattern Maybe it wasn't clear, but I'm not interested in replacing CPUs, only the hardware used for inter-processor communication. The only way I could see replacing the CPU is if something like a vector unit could be built that would be substantially faster than a CPU. A 64-bit-wide floating-point vector unit sounds beyond current capabilities. Maybe if a 1-bit-wide floating-point ALU could be built, then 50 of these could be put on a chip. Tie that to dual-ported memory so the reconfigurable part could read and write vector registers while the vector unit operates, and that would be interesting. A nearby CPU could do scalar ops, set up the communications part of the FPGA, and issue vector instructions. MattArticle: 51647
On Fri, 17 Jan 2003 13:41:53 -0500, Mu Young Lee <muyoung@umich.edu> wrote: >Is there a company out there that makes anything close to what LeCroy >Research Systems used to make? Anyone know of people who used design the >boards at LRS? > >Mu Young Lee >Thousand Oaks, CA My company makes some VME and CAMAC modules that do some of the high-speed timing stuff that LeCroy used to do. We don't do high voltage or FastBus, though. http://www.highlandtechnology.com/ Also check out Phillips Scientific, Jorger, and Jorway. See vita.org if you're interested in VME. What sort of stuff are you working on? JohnArticle: 51648
Roger Green wrote: > I was wondering if any of the experts here have had any similar or different > experiences regarding the "performance" of these two tools on the same > source design. Or perhaps someone could point me to some links to any > "objective" comparison of these tools and/or information regarding what > types of logic each is better/worse at optimizing? Maybe you are comparing XST post-route results to Synplify pre-route timing estimates. Place and route your Synplify netlist before making timing comparisons. Synthesis is optimized for synchronous inputs and registered outputs. The closer you can be to this standard template the better. -- Mike TreselerArticle: 51649
On Fri, 17 Jan 2003 11:47:48 -0800, Michael Wilspang wrote: > Hi Falk and Patrick > > Falk; have you tried this interface solution ? > > But Patricks experience says something else !!! > > I'm confused! > -- > > /Michael Wilspang We've done it with a PIC and it works fine. The configuration interface is not SPI, but it is a simple synchronous serial interface, the PIC has no trouble configuring (at least SpartanII chips)... Peter Wallace