Hi, Just wondering if anyone has used, or tried to use, this device yet. I have a design I did for the ProASIC+ that I converted to the PA3 as a test. It went fairly well. The only real issue I had was that when I set the option for Designer to move flip-flops into the IO cells, it did not implement them correctly; specifically, the async resets were the wrong sense. The device I am targeting is not in production yet, so I can only simulate the back-annotated design. This is where I discovered the problem, so I do not know whether the problem is with the simulation model or with Designer.

I had a couple of problems with the Plus and noticed some changes to the PA3 data sheet, primarily concerning the JTAG TRST pin. Under the pin description section, they "recommend" the following:

"TRST Boundary Scan Reset Pin. The TRST pin functions as an active low input to asynchronously initialize (or reset) the boundary scan circuitry. There is an internal weak pull-up resistor on the TRST pin. In the operating mode, a 100 Ω external pulldown resistor should be placed between TRST and GND to ensure that the chip does not switch into a different mode."

I had a power-up problem on some of the Plus devices and found that if I grounded the TRST pin, the device would start working. I reported this to Actel and had them evaluate the parts that exhibited the problem. Their recommendation: "ground the TRST pin". Sounds to me like there is a problem with the TAP port on both the Plus and PA3 devices, and the 100 Ω resistor is the "patch" to fix it. I am curious whether anyone else has had problems with the TRST pin.

The other problem I have had is a high programming failure rate while using the FlashPro programmer, mostly exit 11 errors. Actel was not able to help us solve this problem. We did not press them on it, since eventually we would be getting the devices programmed by our supplier and it would become a non-issue after that. The curious thing is that if a device programmed successfully the first time, it would more than likely always program successfully. I have reprogrammed a single device 50 to 60 times with no problem. I suspect a marginal problem with the device itself or a problem with the programming algorithm, not with my programming fixture.

It bothers me that Actel will not admit problems with their devices. Xilinx has no problem admitting problems with devices and then publishing a workaround until a permanent fix to the silicon is implemented. Why is Actel reluctant to do this? Maybe this problem with TRST was already known to them, and if they had published an errata on it then maybe I would not have spent over a week debugging it. Why do I use Actel if I am unhappy with their devices? The truth is, it is the only reprogrammable FPGA that fits the application.

Would like to hear about any experiences that other people have had with the Actel flash parts. Thanks,
Dave Colson

Article: 78876

ALuPin wrote:
> Hi,
> maybe someone can give her/his opinion concerning the following question:
> Thank you in advance.
>
> I have a FIFO template with one write clock and one read clock. These
> clocks are fully asynchronous to each other.
>
> Apart from that I have an asynchronous reset port in the FIFO.
>
> My question:
>
> Let us assume that I synchronize the asynchronous reset coming from an
> FPGA input pin in a flip-flop chain to synchronize it to the write clock,
> and in a second flip-flop chain to synchronize it to the read clock.
>
> Which synchronized reset do I have to use
> to reset the FIFO in a safe manner?

Really the FIFO design needs to handle synchronizing the single async input to both clocks internally. I would suggest synchronizing to the write clock and then making sure your write logic stays quiescent (doesn't start writing) for a few cycles after reset, in case the FIFO hasn't finished resetting the read state logic. In the Xilinx COREgen designs, reset asserts both FULL and EMPTY, which effectively prevents reads or writes during the reset condition and until each of these signals de-asserts synchronously to its respective clock.

> Rgds
> André

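A minimal Verilog sketch (not from the original thread; all names invented) of the per-domain synchronizer being discussed: the reset asserts asynchronously but releases synchronously. You would instantiate one of these per clock domain and then hold the write side quiescent for a few cycles, as suggested above:

module reset_sync (
    input  wire clk,          // write clock or read clock
    input  wire async_rst_n,  // raw asynchronous reset from the pin
    output wire rst_n         // reset with release synchronized to clk
);
    reg [1:0] sync_ff;
    always @(posedge clk or negedge async_rst_n)
        if (!async_rst_n)
            sync_ff <= 2'b00;               // assertion is immediate
        else
            sync_ff <= {sync_ff[0], 1'b1};  // release takes two clk edges
    assign rst_n = sync_ff[1];
endmodule
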
Article: 78877

This week the Actel FAEs came in to tell us about a problem with the ProASIC+ chip. It seems the flash retention rate is much lower than expected when using Designer versions prior to 6.1, due to a load-balancing issue across transistors. Unfortunately, they don't have a write-up of this problem on their website yet.

Article: 78878

> Hi all,
> I'm thinking about a new board for JOP (or MB, NIOS). The board should be
> small and cheap (below the S3 Starter Kit). It should only contain the
> absolutely necessary parts for a CPU design. Here is the suggested part list:
>
> What do you guys think about this idea? Does it make sense to build
> another FPGA board?

Martin,

If you do decide to build a new proto board, make sure you've got some way of mounting it securely. If you go with a SimmStick this is taken care of, but I had problems mounting your Cyclone JOP boards for prototyping. I didn't have time to design/build a motherboard to plug the modules into to get access to the connectors, so I had to build a complicated jig to hold them securely.

Nial.

-------------------------------------------------------------
Nial Stewart Developments Ltd
FPGA and High Speed Digital Design
www.nialstewartdevelopments.co.uk

Article: 78879

Hello,

Actually I don't know where you put EDK. For example, I put mine in ~/bin/EDK and I also have some Xilinx stuff in ~/bin/xilinx. To use XPS I first start a terminal and switch to csh by typing:

bertrand@seraphin:~/bin/EDK $ csh
seraphin:~/bin/EDK>

Then I have to source 2 configuration files (one from Xilinx, the other from EDK):

seraphin:~/bin/EDK> source ~/bin/xilinx/settings.csh
seraphin:~/bin/EDK> source ~/bin/EDK/setup.csh

Note: I had to modify this last file, as it was using bash syntax and not csh syntax. It now looks like this:

setenv XILINX_EDK /home/bertrand/bin/EDK
setenv LD_LIBRARY_PATH ${XILINX_EDK}/bin/lin:${LD_LIBRARY_PATH}
setenv PATH ${XILINX_EDK}/bin/lin:${XILINX_EDK}/gnu/microblaze/lin/bin:${XILINX_EDK}/gnu/powerpc-eabi/lin/bin:${PATH}

Then I just need to start XPS by doing:

seraphin:~/bin/EDK> XPS_GUI

If your X server allows TCP connections, it should work.

Hope it helps.

Bertrand

PS: Actually I also modified the csh configuration so that it loads the 2 config files above automatically when I start it. I have a .tcshrc file in my home directory containing:

source ~/bin/xilinx/settings.csh
source ~/bin/EDK/setup.csh

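If you would rather stay in bash than switch to csh, the rough equivalent of the setenv lines above should be something like this (a sketch assuming the same install paths as above; untested):

export XILINX_EDK=$HOME/bin/EDK
export LD_LIBRARY_PATH=$XILINX_EDK/bin/lin:$LD_LIBRARY_PATH
export PATH=$XILINX_EDK/bin/lin:$XILINX_EDK/gnu/microblaze/lin/bin:$XILINX_EDK/gnu/powerpc-eabi/lin/bin:$PATH
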
Article: 78880

XilinxCoreLib is necessary for simulating Xilinx cores. Unisim may also be necessary. The translate_on and translate_off pragmas are for synthesis of the code, and shouldn't affect simulation.

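For anyone who hasn't used them, this is roughly what the pragmas look like in Verilog (a made-up minimal example; the pragma comments themselves are the standard ones recognized by XST and Synplify):

module sim_only_example (input wire clk);
    // synthesis translate_off
    // Everything between the pragmas is seen by the simulator
    // but skipped by the synthesizer.
    always @(posedge clk)
        $display("clock edge at %t", $time);
    // synthesis translate_on
endmodule
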
Article: 78881

Hi,

I am looking for data showing typical development times for FPGA-based designs. I am trying to compare the cost of implementing a complicated DSP system with different signal processing architectures. I have a good idea of how long the development time would be for a programmable DSP and for an ASIC, but I don't have a feel for how long the development time would be with an FPGA-based solution. I have tried the cost calculator from Xilinx, but I have to say I am quite skeptical in regard to the cost estimate. Why would the development cost for an FPGA system be so much cheaper than an ASIC solution? An ASIC solution at 90nm is generally quoted as $20-30M, but an FPGA solution is supposedly "almost free". The only differences I see between the two are in certain aspects of the back-end flow, mask costs, packaging design, and production costs. The up-front design and verification work would be the same for both platforms, I assume. What am I missing?

Crazyd

"crazyd" <olofsson@yahoo.com> wrote in news:1107964781.979024.297810 @z14g2000cwz.googlegroups.com: > Hi, > > I am looking for data showing typical development times for FPGA based > designs. I am trying to compare the cost of implementing a complicated > DSP system with different signal processing architectures. I have a > good idea of how long the development time would be for a progammable > DSP and for an ASIC, but I don't have a feeling for how long the > development time would be with an FPGA based solution. I have tried > the cost calculator from Xilinx, but I have to say I am quite skeptical > in regards to the cost estimate. Why would the development cost for an > FPGA system be so much cheaper than an ASIC solution? An ASIC solution > at 90nm is generally quoted as $20-30M, but an FPGA solution is > supposedly "almost free". The only difference I see between the two > are in certain aspects of the back end flow, mask costs, packaging > design, and production costs. The up fron design and verification work > would be the same for both platforms I assume. What am I missing? > > Crazyd > > This is a question that is virtually impossible to answer out of context. We are releasing a couple of boards that combine a SHARC DSP and Cyclone FPGA. I think it is much easier to use the DSP for DSP and general purpose control, but that rolling certain go fast algorithms into the FPGA would not take very much time. -- Al Clark Danville Signal Processing, Inc. -------------------------------------------------------------------- Purveyors of Fine DSP Hardware and other Cool Stuff Available at http://www.danvillesignal.comArticle: 78883
Article: 78883

Hi,

when trying to sample a global clock as shown in the process below, the Quartus II 4.2 Design Assistant shows the critical warning that clocks should only feed input clock ports. So what can I do about that? I need to sample the clock to find out where the center of the clock period is.

I also get the critical warning that there are delay chains in my synchronous design. The Design Assistant tool recommends not to use any delay chains in synchronous designs. I do not know how the delay chains arose. So how can I comply with the Design Assistant tool and remove the delay chains?

Thank you very much for your help.

process(Clk_fpga_fast)
begin
  if rising_edge(Clk_fpga_fast) then
    l_sample1 <= '0';
    l_sample2 <= '0';
    l_sample3 <= '0';
    l_sample4 <= '0';
    l_sample5 <= '0';
    l_sample6 <= '0';
    if Enable_highspeed='0' then
      l_sample1 <= Clk_fs;
      l_sample2 <= l_sample1;
      l_sample3 <= l_sample2;
      l_sample4 <= l_sample3;
      l_sample5 <= l_sample4;
      l_sample6 <= l_sample5;
    end if;
  end if;
end process;

Article: 78884

Hello,

I'd like to know more about local clocking in the Spartan-3. Some Xilinx app notes refer to XAPP769, "Local clocking in Spartan 3 family devices". However, I can't find it anywhere... So where can I find info about this technique?

Thanks,

Sylvain Munaut

Article: 78885

Nial Stewart wrote:
> "Jedi" <me@aol.com> wrote in message news:H_MNd.332$074.174@read3.inet.fi...
>> Martin Schoeberl wrote:
>>> Have you already done it? A small code snippet would be nice.
>>
>> http://www.fpga.ch/forum/viewtopic.php?t=3
>> enjoy (o;
>> rick
>
> The only thing you have to worry about here is /OE; it's an active-low
> output enable for the FPGA to drive the interface.

Well... /OE should be obvious to all (o;

> Also note that when you're programming the EPCSx/replacement you have to
> bit-reverse the bits of each byte from the RPD file (I think that's what
> it's called, from memory).

This is documented in the EPCS datasheet... not in the RPD file, but as the bit order the FPGA reads in.

rick

> I have had this working by hand (ie manually driving my interface design)
> but haven't got software to drive it properly yet. I'll hopefully get some
> time to look at this soon and will release any results.
>
> I don't know why Altera didn't document this feature, it's a real selling
> point and I hope Cyclone II includes the same core.
>
> Hope this helps,
>
> Nial
>
> -------------------------------------------------------------
> Nial Stewart Developments Ltd
> FPGA and High Speed Digital Design
> www.nialstewartdevelopments.co.uk

Article: 78886

> Hi,
> I am looking for data showing typical development times for FPGA-based
> designs. I am trying to compare the cost of implementing a complicated
> DSP system with different signal processing architectures.
> I have tried the cost calculator from Xilinx, but I have to say I am
> quite skeptical in regard to the cost estimate.

Aye, well, they are going to be trying to sell you an FPGA solution ;-)

> Why would the development cost for an
> FPGA system be so much cheaper than an ASIC solution? An ASIC solution
> at 90nm is generally quoted as $20-30M, but an FPGA solution is
> supposedly "almost free". The only differences I see between the two
> are in certain aspects of the back-end flow, mask costs, packaging
> design, and production costs. The up-front design and verification work
> would be the same for both platforms, I assume. What am I missing?

The verification of an FPGA design can be done incrementally as it grows. I have read of several ASIC developments that used FPGA solutions to allow (lower performance) system testing before committing the ASIC to production. You can also add test/verification functionality to FPGA designs to test various aspects of your design, and thorough system tests can be carried out much more quickly than things can be simulated.

The up-front engineering-hour cost of the design development will probably be roughly the same for the two development approaches, though.

Nial.

-------------------------------------------------------------
Nial Stewart Developments Ltd
FPGA and High Speed Digital Design
www.nialstewartdevelopments.co.uk

Article: 78887

From my own experience in designing both ASICs and FPGAs for DSP, I would agree that the main difference is going to be in the back-end physical design flow and the associated NRE costs. There is no difference in terms of methodology or cost for the front-end design work: HDL coding, verification, regression test development, etc. Modern FPGAs have complexity rivalling ASICs of just a few years ago and consequently require the same discipline in design methodology.

All that being said, I do think there is roughly an order of magnitude cost difference in developing an ASIC vs. an FPGA. The physical design portion of ASIC development is a very complex undertaking, and mask costs are exploding. And if you have a serious bug in the ASIC that doesn't have some kind of software workaround, you can spend several $100k on more mask changes to implement the ECO, with an associated schedule hit of months. Because of the consequent paranoia about ASIC bugs and their associated expense, teams tend to take more time in the verification effort. So your development time is quite a bit longer than with an FPGA, where a bug is not so tragic.

With an FPGA, if you missed something in verification and that bug shows up in system bring-up, your only costs to fix it will be the debug time and redesign time. Plus, you can make much more extensive fixes, even feature upgrades, provided they fit into your chosen device. You're not limited to changes constrained by what can be implemented through metal mask changes, as in ASICs. The flexibility of the FPGA is a big bonus if the device can meet your speed, power and cost targets.

One last thing: in DSP applications it is often true that you need to prototype the system and fine-tune the algorithms, because modelling may not cover everything. One example is the design of radio channel modems. It's tough to completely model the channel and be sure your algorithms work under all conditions. With an FPGA implementation, significant changes to the algorithm are still possible during field trials on the platform that will actually ship. This may not be true for an ASIC implementation (depending on the degree of its configurability and programmability).

-Paul

Article: 78888

Hello,

Some time ago I bought a box of surplus EPM7128SLC84 CPLDs for the more advanced pupils at my school to have a play with. It was to be a learning exercise for us all, something different from MCUs. Unfortunately, I've just found out (I think) that they all have their JTAG pins programmed as user I/O. I can't program them with my ByteBlaster, although it works fine with a smaller (new) CPLD I bought. I wasn't aware of this potential problem before. The error from the Quartus Programmer is:

Info: Unrecognized Device
Error: JTAG ID code specified in JEDEC STAPL Format File does not match any valid JTAG ID codes for device
Error: Operation failed

Nor will the Quartus programmer auto-identify the devices, saying "Can't scan JTAG chain"; again, it works fine with the new one.

First, does my assumption here sound correct? Second, is there any way to clear the JTAG pins without a Master Programmer? (Google thinks the answer is no, but it's worth asking.) Third, is there anyone who has a Master Programmer or other capable programmer (in the UK, preferably near Manchester) who would be prepared to help me out and reprogram about 25 CPLDs? I don't mind donating a few beer tokens if it will help!

Rick Fox (Belmont Special School)
Manchester, UK.

Article: 78889

Our current run-time comparisons show that, for the same hardware, the XP and Linux results are within 5% of each other. Either OS will work fine as far as Quartus is concerned.

Hope this helps,
Subroto Datta

nachum wrote:
> I have read that Quartus is optimized to take advantage of the SSE
> instruction sets in Linux, yet not in Windows. Can I assume from this
> that Quartus will run faster on Linux?
>
> I am buying a new computer, and I want to decide what OS to run, and if
> Linux will be faster for FPGA design, then that is what I will run.
>
> Thanx again,
>
> nachum

Article: 78890

I am using an external USB 2.0 HD with 250GB. This is very fast and gives one all the storage required for development. Another point is that when I go home at the weekend I only take the HD with me and not the whole notebook.

Regards,
Thomas

John Adair wrote:
> I have an ACER 1502, almost an earlier model of the Ferrari. It is very
> quick. The only thing that lets it down is the disk speed, but this is a
> common issue for power users with laptops. There are faster laptop drives,
> so it is worth finding out the disk spec on any given laptop.
>
> I have done benchmarking in the past on Intel vs. AMD and found one faster
> on some parts of the flow whilst the other does better on the other bits.
> Overall, when I ran these tests the factors balanced out and there was not
> much difference. These results will vary with the design, but it would be
> interesting to know how the latest batch of offerings perform.
>
> John Adair
> Enterpoint Ltd. - Home of Broaddown2. The Ultimate Spartan3 Development Board.
> http://www.enterpoint.co.uk
>
> "nachum" <nachumk@gmail.com> wrote in message
> news:1107956041.878727.33690@g14g2000cwa.googlegroups.com...
>> I have been searching for a good laptop for FPGA design. My research
>> (??) indicates that AMD Athlon 64 chips perform faster than P4 chips on
>> average. Of course it depends on which P4 and which AMD, but for the
>> same cost it seems that AMD wins.
>>
>> Can anyone tell me if this is correct?
>>
>> Based on this assumption, I have only found 2 brand-name AMD-based
>> laptops: the HP (Compaq design based) R3000Z series, and the Acer
>> Ferrari. There are also some sites that do custom laptops, but I don't
>> really like that idea at all. I am also looking for a light-to-medium
>> weight laptop, nothing heavy like some of the big Toshibas.
>>
>> Any advice or experience in this area would be welcome.
>>
>> Thank you,
>> nachum

Article: 78891

Hi Ann,

I'm not sure what you are trying to do, but maybe the techXclusives article on reconfiguring block RAMs below will help.

http://www.xilinx.com/xlnx/xweb/xil_tx_display.jsp?sGlobalNavPick=&sSecondaryNavPick=&category=&iLanguageID=1&multPartNum=1&sTechX_ID=krs_blockRAM

Philip Nowe
www.dulseelectronics.com

"Ann" <ann.lai@analog.com> wrote in message news:ee8b8e5.11@webx.sUN8CHnE...
> Hi Gabor,
>
> Did you remember how to do it? Do you have example code? I don't
> understand the inputs and outputs in the example that I posted above. Is
> it true that all I need is to put an instance of BSCAN in the code?

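On that last question: roughly yes, but you still have to wire the user-side ports of the primitive up to your own shift logic. A minimal sketch, assuming a Virtex-II class part (the thread doesn't say which family is in use; the port list is from the Xilinx libraries guide, and the tap_*/user_* signal names are invented for illustration):

wire tap_tdi, drck1, sel1, tap_shift, tap_update, tap_capture, tap_reset;
wire user_tdo1;  // driven by your own shift register, back toward TDO

BSCAN_VIRTEX2 u_bscan (
    .TDI    (tap_tdi),      // serial data arriving via the USER1 instruction
    .DRCK1  (drck1),        // gated TCK while the USER1 data register is selected
    .SEL1   (sel1),         // high while the USER1 instruction is active
    .SHIFT  (tap_shift),    // TAP controller in Shift-DR
    .UPDATE (tap_update),   // TAP controller in Update-DR
    .CAPTURE(tap_capture),  // TAP controller in Capture-DR
    .RESET  (tap_reset),    // TAP reset
    .TDO1   (user_tdo1),    // user data shifted back out on TDO
    .DRCK2  (),             // USER2 chain unused in this sketch
    .SEL2   (),
    .TDO2   (1'b0)
);
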
"Jedi" <me@aol.com> wrote in message news:H_MNd.332$074.174@read3.inet.fi... > Martin Schoeberl wrote: >> Have you allready done it? A small code snippet would be nice. > http://www.fpga.ch/forum/viewtopic.php?t=3 > enjoy (o; > rick The only thing you have to worry about here is /OE, it's an active low output enable for the FPGA to drive the interface. Also note that when you're programming the EPCSX/replacement you have to bit reverse the bits of each byte from the RPD file (I think it's called from memory). I have had this working by hand (ie manually driving my interface design) but haven't got software to drive it properly yet. I'll hopefully get some time to look at this soon and will release any results. I don't know why Altera didn't document this feature, it's a real selling point and I hope CycloneII includes the same core. Hope this helps, Nial ------------------------------------------------------------- Nial Stewart Developments Ltd FPGA and High Speed Digital Design www.nialstewartdevelopments.co.ukArticle: 78893
Article: 78893

Hello,

I'm a total newbie to FPGA configuration and EDK and so on. At present I'm trying to add some cores to projects created from scratch with the Base System Builder, but I find it hard to succeed in doing it.

I'd like my hardware to support DDR, as the card I'm working on (Avnet XC4VLX25) has a DDR memory block. Actually my problem concerns the "port" tab in the "add/edit cores" dialog: how can I know which ports I have to connect, and where should I connect them? Could anyone tell me where I am supposed to find this information?

Thanks!
Bertrand

Article: 78894

Have you run out of block RAMs?

"Ram" <rchandra@nuelight.com> wrote in message news:ee8bb17.-1@webx.sUN8CHnE...
> Hi, I'm looking for timing info on an asynchronous FIFO built using
> distributed RAM on Virtex-4. The part # we are using is LX100-10, package
> FF1513. Xilinx publishes the FIFO timing info for asynchronous FIFOs using
> block RAM, but not distributed RAM. Specifically, I'd like to know the
> timing of all the FIFO signals (FULL/EMPTY, ALMOST_FULL/ALMOST_EMPTY and
> other FIFO status signals). If someone could shed some light on where I
> might find this info, I'd really appreciate it.

Article: 78895

Bertrand Rousseau wrote:
> Hello,
>
> I'm a total newbie to FPGA configuration and EDK and so on. At present I'm
> trying to add some cores to projects created from scratch with the Base
> System Builder, but I find it hard to succeed in doing it.
>
> I'd like my hardware to support DDR, as the card I'm working on (Avnet
> XC4VLX25) has a DDR memory block. Actually my problem concerns the
> "port" tab in the "add/edit cores" dialog: how can I know which ports I
> have to connect, and where should I connect them? Could anyone tell me
> where I am supposed to find this information?

For every core there's a PDF with documentation explaining all the ports and parameters. If you double-click on the core in the schematic, there's a "doc" button that opens the PDF. Or you can go to $EDK/hw/something; there's a folder for every core with a subfolder "doc", containing the PDF.

cu,
Sean

Article: 78896

> The USB 2.0 HS core would not be as easy to set up as the 1.1 core, just a friendly hint.
>
> Antti

Hi,

Is it possible to use only the physical interface of USB? No standard USB handshake, no ACK/NAK, no control signals, no CRC. No USB protocol layer. No USB core functions. Just raw high-speed data transfer. Is it possible/easy?

I'm asking because when I read the SMSC GT3200 PHY IC datasheet, I understand that reading and writing to the USB port (physically) is very simple. What about PC software? Can we access USB without handshakes, controls, etc.?

thanks,
yusuf

Article: 78897

Thank you, Bertrand. I have successfully started EDK under Linux.

Article: 78898

Thanks a lot. Is it also possible to know which ports are optional? I couldn't find any information about that in the PDF file.

Article: 78899

Ok! From all the previous topics and other resources on the net, I see the following circuit touted to work for distributing async resets:

always @(posedge clk or negedge ext_reset_l)
  if (!ext_reset_l)
    int_reset_l <= 2'b00;
  else begin
    int_reset_l[0] <= 1'b1;
    int_reset_l[1] <= int_reset_l[0];
  end

assign chip_reset_l = int_reset_l[1];

My question is: what happens if the second flop (int_reset_l[1]) goes metastable, and if it cannot, then why? I cannot understand why so many have recommended the above circuit as a solid way of negating an async reset synchronously!? e.g.

http://i.cmpnet.com/deepchip/downloads/cliff_resets.zip

I would appreciate it much if someone could educate me on this. TIA.