Mike Treseler wrote:
> ALuPin wrote:
>> Simulation might be a good first step in order to prove
>> first correct functionality
>
> Best first, second and third step.

Especially given the possibility of testing using post-place-and-route simulation models with back-annotated signals, which include most of the timing issues that can crop up...

>> but what if there are some components which cannot be reproduced by models so easily?

I'm not sure I've correctly understood this question, but it seems to me that if you can't test a module in simulation, where you have access to _all_ the internal signals, you'll have an even harder time doing so when you only have access to a limited subset... Could you elaborate? What couldn't be reproduced?

> Inside the FPGA, don't use vendor core generators.
> Write your own code that infers what you need.

Mike, I'd be interested in hearing why you think this, I'm not sure that I agree.

Pierre-Olivier
--
-- to email me directly, remove [N0SP4M] from my address --
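For readers wondering what "write your own code that infers what you need" looks like in practice, here is an illustrative sketch (not taken from any of the posters; the module name, width and depth are invented) of a memory written in plain Verilog that synthesis tools of this generation will normally map onto block RAM without any vendor core generator:

module inferred_ram (clk, we, addr, din, dout);
    parameter ADDR_W = 10;            // 1024 words (made-up size)
    parameter DATA_W = 16;

    input                 clk;
    input                 we;
    input  [ADDR_W-1:0]   addr;
    input  [DATA_W-1:0]   din;
    output [DATA_W-1:0]   dout;
    reg    [DATA_W-1:0]   dout;

    reg [DATA_W-1:0] mem [0:(1 << ADDR_W) - 1];

    always @(posedge clk) begin
        if (we)
            mem[addr] <= din;
        dout <= mem[addr];            // registered (synchronous) read
    end
endmodule

Whether a given template is recognised does depend on the synthesis tool and its version, which is presumably part of the trade-off the question above is getting at.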
Article: 63776

> > CygWin is OK, but it's large and slow. For a faster and smaller unix-like
> > shell you can consider Msys (from www.mingw.org) or actually Services for
> > Unix from MS (http://www.microsoft.com/windows/sfu/default.asp). I myself
> > use GNU make under plain vanilla CMD.EXE. It works fine for me, though it's
> > a bit annoying that Xilinx changes the command-line options in every single
> > release of their toolchain.
>
> What does 'make' do for you that batch files to run the tools doesn't?
>
> Not being aggressive - just curious in case I am missing out on some labour
> saving functionality!

Not much, since xilinx tools need a complete rebuild of the project if a single source-file changes. (Due to the 'flattening' of the design.) It buys some degree of platform independence though. When I gave Linux a try it was much easier to port my design flow to that platform.

Regards,
Andras Tantos
Article: 63777

First Call for Papers

7th Mil/Aerospace Applications of Programmable Logic Devices International Conference (MAPLD)
Ronald Reagan Building and International Trade Center
Washington, D.C.
September 8-10, 2004

Hosted by the NASA Office of Logic Design

The 7th annual MAPLD International Conference's extensive program will include presentations, seminars, workshops, and exhibits on programmable logic devices and technologies, digital engineering, and related fields for military and aerospace applications. Devices, technologies, logic design, flight applications, fault tolerance, usage, reliability, radiation susceptibility, and encryption applications of programmable devices, processors, and adaptive computing systems in military and aerospace systems are among the subjects for the conference. We are planning an exciting event with presentations by Government, industry, and academia, including talks by distinguished Invited Speakers. This conference is open to US and foreign participation and is not classified. For related information, please see the NASA Office of Logic Design Web Site (http://klabs.org).

This year, there will be special emphasis on the following themes:
• "War Stories" and Lessons Learned
• Programmable Logic and Obsolescence Issues
• Implementing high performance, high reliability processor cores.
• Logic design evaluation, design guidelines, and recommendations.
• Verification methods for radiation hardness and fault tolerance.
• Applications such as MIL-STD interfaces, UAV's, and controllers.
• Automated Checkers for low reliability design constructs.
• PLD tools/methods that we need but vendors don't supply.

CONFERENCE HOME PAGE - http://klabs.org/mapld04 - contains an abundance of information on both technical and programmatic aspects of the conference.

SEMINARS - Two full-day seminars will be presented:
• VHDL Synthesis for High-Reliability Systems
• Aerospace Mishaps and Lessons Learned

PANEL SESSION:
• Why Are Next Generation Launch Vehicles and Spacecraft So Hard?

WORKSHOPS & "BIRDS OF A FEATHER" SPECIAL SESSIONS
• Mitigation Methods for Reprogrammable Logic in The Space Radiation Environment
• Reconfigurable Computing - New Extended Format!
• PLD Failures, Analyses, and the Impact on Systems - NEW for 2004!!!
• Digital Engineering and Computer Design - A Retrospective and Lessons Learned for Today's Engineers
• "An Application Engineer's View" - Back for 2004!

TECHNICAL SESSIONS:
• Applications: Military and Aerospace
• Systems and Design Tools
• Radiation and Mitigation Techniques
• Processors: General Purpose and Arithmetic
• Reconfigurable Computing, Evolvable Hardware, and Security
• Poster Session

INDUSTRIAL and GOVERNMENT EXHIBITS AND SPONSORS (early reservations, more to come):
NASA Office of Logic Design
Mentor Graphics Corporation
Xilinx Corporation
Synthworks
Tensilica
Actel Corporation
Annapolis Microsystems
Space Micro, Inc.
SEAKR Engineering
Aldec
IEEE Aerospace and Electronics Systems Society
Hier Design
Global Velocity
Lattice Semiconductor
Quicksilver Technology
Celoxica
BAE Systems
Nallatech
The Andraka Consulting Group
Aeroflex
Synopsys
Peregrine Semiconductor
Starbridgesystems
Condor Engineering

For more information, please visit http://klabs.org/mapld04 or contact:

Richard Katz - Conference Chair
NASA Goddard Space Flight Center
mapld2004@klabs.org
Tel: (301) 286-9705
Article: 63778

Hi,

I have a problem with the Quartus II 3.0 software. I am using an enhanced PLL and a set of input registers with the FAST INPUT REG = on constraint. Thus I get about 1 ns of pad-to-IOB-flop delay on the datapath. However, the delay of the clock from the PLL to the IOB flops is higher, around 3 ns. Thus I get a hold violation of 2 ns. Does anyone have any suggestions as to how I can eliminate this hold violation?

Shardendu
Article: 63779

Hi all,

Is WebPack able to write an RTL post-synthesis VHDL file? If yes, how do I do it?

Thanks
Laurent
www.amontec.com
Article: 63780

Hi,

I want to know how to properly interface between the CPU clock and the FPGA clock. My board has separate clocks for CPU and FPGA.

Best Regards,
Kelvin
Article: 63781

Hi,

I will use a Spartan-IIe in Master Serial Config Mode (000). What is the value of the CCLK pin after configuration (DONE high)? Is it tristated, pulled up, pulled down or toggling?

Erik
--
\\Erik Markert - student of Information Technology//
\\ at Chemnitz University of Technology //
\\ TalkTo: erma@sirius.csn.tu-chemnitz.de //
\\ URL: http://www.erikmarkert.de //
Article: 63782

Marc Randolph wrote:
> Gernot Koch wrote:
>
>> Hi,
>>
>> I've tried to understand exactly which clocks a DCM will phase align
>> when I do external or internal feedback by the book. I've read every
>> bit of documentation I could find, but came out empty-handed. So maybe
>> someone out there can help me out...
>>
>> This is the structure I have for internal feedback:
>>
>> module int_fb(clki, clko);
>>   input clki;
>>   output clko;
>>   wire clki_buf, clk_fb, clk_2x;
>>   assign clko = clk_fb;
>>   IBUFG ibufg0(.I(clki), .O(clki_buf));
>>   DCM dcm0(.CLKIN(clki_buf), .CLKFB(clk_fb), .CLK2X(clk_2x));
>>   BUFG bufg0(.I(clk_2x), .O(clk_fb));
>> endmodule
>>
>> Which wires are phase-aligned clocks here?
>
> The output of the BUFG (clk_fb) will be phase aligned with your input
> clock (clki). By connecting to the CLKFB input, the DCM removes the
> phase offset introduced by routing, the BUFG, and the DCM itself.
>
> This results in the rising edge internal to the FPGA occurring at nearly
> the same time as the rising edge of the clock feeding the FPGA... hence
> you maintain a completely synchronous system.

So it makes the board clock synchronous to the internal clock. Can I use the CLK0 output of the DCM at the same time if I need both clock speeds inside the FPGA? Will both CLK0 and CLK2X then be phase aligned with clki?

>> Also related: what does the DESKEW_ADJUST attribute do? The
>> documentation I found says only how you set it, but not what it does...
>
> The first two hits when typing in DESKEW_ADJUST in the Xilinx search
> page seem to explain it pretty well:
>
> http://toolbox.xilinx.com/docsan/xilinx6/books/data/docs/cgd/cgd0086_39.html
> http://support.xilinx.com/xlnx/xil_ans_display.jsp?iLanguageID=1&iCountryID=1&getPagePath=14743

Well, I had seen those. However, they are quite vague about the effect of the DESKEW_ADJUST setting. I did not understand what precisely happens if you set that to SOURCE_SYNC vs. SYSTEM_SYNC. Nor is there any information on what the other possible settings will result in. E.g. what would happen in the example above (internal feedback) if I choose one setting vs. the other?
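Regarding the CLK0 question: one common arrangement when both the 1x and 2x clocks are needed on-chip is to take the DCM feedback from the buffered CLK0 and buffer CLK2X as a second global clock; per the Virtex-II DCM documentation, once LOCKED asserts, the DLL outputs are deskewed to CLKIN, so both buffered clocks end up edge-aligned with the board clock. The Verilog below is only a sketch (module and net names invented, unused DCM ports omitted, and the exact CLK_FEEDBACK attribute syntax depends on your tool version):

module dcm_1x_2x (clki, clk1x, clk2x, locked);
    input  clki;
    output clk1x, clk2x, locked;

    wire clki_buf, clk0_unbuf, clk2x_unbuf;

    IBUFG ibufg0 (.I(clki), .O(clki_buf));

    // Feedback is taken from the buffered CLK0 output.
    DCM dcm0 (
        .CLKIN  (clki_buf),
        .CLKFB  (clk1x),        // buffered CLK0 closes the deskew loop
        .CLK0   (clk0_unbuf),
        .CLK2X  (clk2x_unbuf),
        .RST    (1'b0),
        .LOCKED (locked)
    );
    defparam dcm0.CLK_FEEDBACK = "1X";   // "1X" is also the DCM default

    BUFG bufg0 (.I(clk0_unbuf),  .O(clk1x));
    BUFG bufg1 (.I(clk2x_unbuf), .O(clk2x));
endmodule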
Article: 63783

> The reason for doing this is so that i can do a functional simulation
> of my design prior to synthesising it - synthesis currently takes 4
> hours ! !

Ouch, that's quite a long time for synthesis... unless by synthesis you mean a design that's routed and ready to be downloaded into an FPGA. If by synthesis you mean the process of just mapping VHDL into FPGA primitives, before the place and routing is done, then perhaps instead of synthesizing the entire design as a whole, you could synthesize it in parts. I'm not sure how much improvement it would give you in your situation, but we find it helpful to do when Synplify is taking a long time to synthesize. Perhaps it's a memory usage thing.

Well, that didn't answer your original question, but the 4 hours to synthesize stuck out at me.

Regards,
Vinh
Article: 63784

I found the problem. The problem was that the bitfile also contained my application code (the .elf file was a mix of xmdstub and my application due to a mistake in the makefile). In EDK 3.2 the script bram_init.sh was used, which automatically took the right stuff to put in the bitfile (xmdstub or the executable). In EDK 6.1 this script is not used anymore. Since I run everything from the command line, I have to change the way I update the bitfile in case of debugging, and that was what I forgot...

Anyway, thanks for your help,
Frank

"Raj Nagarajan" <raj.nagarajan@xilinx.com> wrote in message news:3FCC0304.405645CC@xilinx.com...
> Hi Frank,
>
> Frank wrote:
> >
> > Hi,
> >
> > since I upgraded the EDK 3.2 to 6.1 (with service pack 1) I'm having
> > problems with debugging. To debug I do the following:
> >
> > - add next two lines to mss file
> >
> > PARAMETER DEFAULT_INIT = XMDSTUB
> > PARAMETER DEBUG_PERIPHERAL = dbg_uart
> >
> > - set in makefile the optimization off (-O0)
> > - set in makefile mode to xmdstub (instead of executable)
> > - set in makefile compiler option -g (generating debug information)
> >
> > after downloading the bitfile, the program starts to run (very very slowly,
> > but it runs! while I'm expecting that it should not run!). After starting
>
> If your download.bit contains xmdstub (typically the case for XMDSTUB
> mode), the xmdstub starts to execute after the bitfile is downloaded.
> After the following changes in mss file, did you update your bitstream?
>
> > xmd, I try to connect:
> >
> > mbconnect stub -comm serial -port com1 -baud 115200
> >
> > and the next error was generated:
> >
> > XMD% mbconnect stub -comm serial -port com1 -baud 115200
> > ERROR: Unable to sync with stub on the board using the UART
> > Closing serial port
> > Unable to establish connection to xmdstub
> > Unable to connect to MicroBlaze Target
>
> Looks like your download.bit file does not contain xmdstub. Updating the
> bitstream should have xmdstub in your design. There is no change in
> EDK6.1 and the design should work.
>
> -Raj
>
> > Does anyone have an idea? The described procedure was working with EDK 3.2
> > and ISE 5.2...
> >
> > Thanks,
> > Frank
Article: 63785

Heya Kelvin,

> I want to know how to properly interface between the CPU clock and the FPGA
> clock.

That's a pretty broad question. There are all sorts of solutions that vary in complexity. It will help if you can explain your design situation in more detail. What sort of data is flowing between the two clock domains? What's the interaction like?

Regards,
Vinh
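Whatever the details turn out to be, the building block that appears in nearly every answer to this kind of question is a two-stage synchronizer for single-bit control signals crossing between the CPU and FPGA clock domains. The Verilog below is only an illustrative sketch (names invented); multi-bit data would normally cross through a handshake or an asynchronous FIFO instead:

module sync2 (clk_dst, d_async, q_sync);
    input  clk_dst;    // destination-domain clock (e.g. the FPGA clock)
    input  d_async;    // level signal generated in the other clock domain
    output q_sync;     // version that is safe to use in the clk_dst domain

    reg [1:0] shreg = 2'b00;

    always @(posedge clk_dst)
        shreg <= {shreg[0], d_async};   // two flops in series resolve metastability

    assign q_sync = shreg[1];
endmodule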
Article: 63786

Tobias,

The VHD file is generated just for simulating the core, if you want to. According to the Coregen user guide, the XCO file is a log of the Coregen settings used to create your core, used in case you want to regenerate the same core with some modifications. All you really need is the EDN file to feed into the place and route tools.

Not sure if I understood your question, but I hope that helps.

Regards,
Vinh

Article: 63787

"Joel Kolstad" <JKolstad71HatesSpam@Yahoo.Com> wrote in message news:bqhflk$3nr$1@news.oregonstate.edu...
> HI Simon,
>
> Simon Peacock <nowhere@to.be.found> wrote:
> > Data is only sent at the speed it arrives at.. so although you are 1.5%
> > fast.. you actually end up adding extra stop bits into the stream to
> > compensate .. is simple and oh so clever.
>
> Well, fair enough, if you KNOW that your data source is truly supposed to be
> 'nominally,' say, 9600bps, then you can be clever and get away with an
> internal nominal bit rate of 9600*1.015=9744bps. However, if you tell me
> you have such a device that you'd like to connect to one of my devices,
> you'd better be able to convince me there really is no way your device could
> start spewing data continuously at 9744 bps when my 9600 bps receiver isn't
> going to be able to hack it!
>
> > This is exactly what modems did
> > for years to cope with the V.14 shaved bits when they couldn't do them.
>
> Interesting; I didn't know that either!

if you start looking for a start bit 1/2 way thru the stop .. no problems :-)

Since modems often use embedded processors.. which don't do fractional stop.. you have to do something.. so they mostly fudge it. Rockwell, Intel, silicon systems.. all have done it.. maybe even still do.. wouldn't surprise me if they did.. the first Rockwell 33k6 modems just used a fast version of their 2400 processors.. 6802 or something like that

Simon
Article: 63788

Jake Janovetz wrote:
> What do you folks use as a command line shell in Windows? I know
> several people are working outside of Project Navigator (Xilinx) for
> builds, and Windows is just not a very comforting environment for
> shell folks. What 'make' utility do you use?
>
> Jake

Cygwin. I can't live with a Windows machine without Cygwin installed. It has everything a Unix shell has, including tab-completion, history, ...

Gernot
Article: 63789

Hi

I'm sorry to post this question here, I just wasn't able to find a suitable answer elsewhere. What is the actual (read: verifiable and/or presented in a report) gain in synthesis/p&r/map times and general usability of the ISE toolchain that can be achieved from using a dual-CPU modern PC (read: 3 GHz Pentium IV running under WinXP Pro)? Is there actually something to gain from purchasing a second CPU (since the machine itself is bi-processorable)?

The typical device targeted during development is an XC2V6000, so RAM will have to be in the 2/3 GB range, but I'm especially interested in the CPU issue: 1 or 2?

Thanks in advance to whoever might have an answer to this...

Eric
Article: 63791

On Thu, 04 Dec 2003 11:24:43 +0100, "Gernot Koch (remove digits from user)" <g1er3not.k5och88@micronas.com> wrote:

>Jake Janovetz wrote:
>> What do you folks use as a command line shell in Windows? I know
>> several people are working outside of Project Navigator (Xilinx) for
>> builds, and Windows is just not a very comforting environment for
>> shell folks. What 'make' utility do you use?
>>
>> Jake
>
>Cygwin. I can't live with a Windows machine without Cygwin installed.

Ditto.

> ... including tab-completion, history, ...

The Windows command shell also has these features (at least in contemporary versions of Windows). Tab-completion is disabled by default. Use regedit to change the value of HKEY_CURRENT_USER\Software\Microsoft\Command Processor\CompletionChar from 0 to 9 to enable it. It's not the same as tab-completion in bash, but it's better than nothing.

Regards,
Allan.
Article: 63792

Eric,

I don't think any of the tools currently available use the second processor for synth/P+R etc, but it means you can do other things while the machine's chomping away on your design.

Nial

------------------------------------------------
Nial Stewart Developments Ltd
FPGA and High Speed Digital Design
Cyclone PCI development/eval board
www.nialstewartdevelopments.co.uk
Article: 63793

I'm pretty new to FPGA programming, and I've yet to understand quite a few of the steps I go through from compiling my VHDL files to finally generating bit files. I'm working with a Xilinx Virtex-II Pro FPGA, using ISE 6.1i, and I'm only starting to get familiar with the tools. But I'm having some problems understanding what all the (automated) steps do, and what they actually mean. What I want to know is the following:

1. What does the "Translate" step do? What is actually produced by this step in the implementation ladder?
2. What happens in the "Map" stage, and what is produced?
3. What is left for "Place & Route" to do? (A whole lot I suppose, since it takes so long...)

Well, you get the picture. I'm a genuine newbie, and I want to know what the he... is going on when I skillfully double click the "Implement Design" icon :p

Any comments, or links to introductory guides, will be greatly appreciated. And I might as well warn you right away: I will probably bother you guys with questions about the myriad of different files produced during the "implement design" process when I'm starting to understand what is actually happening during that process.

Sincerely
-Fred, Norway.
Article: 63794

I use dual processor machines running NT4/Win2000 with ISE. I have not checked the latest version fully, but with previous versions the dual processor does not usually give greater performance on place and route. The second processor is useful when you want to do something else other than the place and route.

If you are running tools outside the graphical interface then you can do 2 things. I often run two MPPRs simultaneously, or run an MPPR whilst using Timing Analyser on a completed result.

If you are after performance for a XC2V6000 design then consider more memory. Task Manager will tell you how much is being used and if more would be useful. Be careful though: I believe NT4/Win2000 (Workstation) has a 2GB limit; I am not sure about XP limits. Linux can support more but I have not run this OS so I can't say any more about it.

Look at your disk performance. Consider a RAID array if you don't already have one. Fast disk access will help a small bit.

John Adair
Enterpoint Ltd.

This message is the personal opinion of the sender and not necessarily that of Enterpoint Ltd. Readers should make their own evaluation of the facts. No responsibility for error or inaccuracy is accepted.

"Eric BATUT" <grostuba@ifrance.com> wrote in message news:ee81679.-1@WebX.sUN8CHnE...

Hi

I'm sorry to post this question here, I just wasn't able to find a suitable answer elsewhere. What is the actual (read: verifiable and/or presented in a report) gain in synthesis/p&r/map times and general usability of the ISE toolchain that can be achieved from using a dual-CPU modern PC (read: 3 GHz Pentium IV running under WinXP Pro)? Is there actually something to gain from purchasing a second CPU (since the machine itself is bi-processorable)?

The typical device targeted during development is an XC2V6000, so RAM will have to be in the 2/3 GB range, but I'm especially interested in the CPU issue: 1 or 2?

Thanks in advance to whoever might have an answer to this...

Eric
Article: 63795

Hi Jian,

> I have a project where one of the output signals is a short pulse with a
> width of 100ns and repeat frequency of 1MHz. The main clock in my design is
> 20MHz. After I download the program to the chip (EPM7128SLC-7) severe
> overshoots (over 2V @ VCC=5V) can be observed. Does anybody know an answer to
> this problem? Thank you.

This could be due to ground bounce or a badly matched trace impedance. I suggest that you turn on the 'Slow Slew Rate' option for this pin. It will make the level transitions of the pin slower by about 1ns (it's a bit asymmetrical) so that the device doesn't 'pull as hard' on the line. If that doesn't help, get back to us.

Best regards,
Ben
Article: 63796

> And I might as well warn you right away. I will
> probably bother you guys with questions about the myriad of
> different files produced during the "implement design" process
> when I'm starting to understand what is actually happening during
> that process.

Actually, I'll start right away:

- Which process(es) create the ucf and pcf files?
- Which process(es) use the ucf and pcf files?
- Which process(es) will overwrite my manually created ucf and pcf files, and what can I do about it?
Article: 63797

On Thu, 4 Dec 2003 02:30:30 -0800, "Eric BATUT" <grostuba@ifrance.com> wrote:

> Is there actually something to gain from purchasing a second CPU
> (since the machine itself is bi-processorable)?

The Xilinx tools are single threaded, but having two processors allows you to run two instances of the Xilinx software at the same time. This can halve the number of machines you need in your server farm, which saves a lot of money, power and space.

In my current job, we have three designers running routes on one dual processor server. This works quite well.

In my last job, we had (before the redundancies) about 25 designers using about 10 machines in the server farm. Most of the servers could handle two jobs at once. The fastest machines usually had one or two jobs running; the slower ones were rarely used. (Note: you'll need to have some custom software to allocate jobs to servers. This is a lot easier if you don't use the Xilinx GUI.)

None of this will make much sense if you are a "one-person shop" with only a single computer.

Regards,
Allan.
Article: 63798

Hello Fred,

reading the sites that you find at http://toolbox.xilinx.com/docsan/xilinx5/manuals.htm will probably make you forget your questions :0)

Christian
Article: 63799

jwing23@hotmail.com (J-Wing) wrote in message news:<d6e7734d.0312020835.42729684@posting.google.com>...
> The NIOS processor runs on a 33.333MHz clock. How can I increase the
> speed of the clock and what is the maximum speed which can be
> achieved? Please advice.

Hi,

One way of achieving this would be to remove the Y1 clock source and put a faster one in its place (you need to check the schematics for the pinning). For the second question, you can always check the timing analysis for the fmax of your design; if I remember correctly you should be able to get 50-70MHz depending on the design. (You need to change the target frequency in the SOPC Builder also.)

Cheers
Fredrik