Mike Treseler wrote:
<snip>
> note to vendors:
>
> When I need something from you, I check your
> web site, not my old emails. Drop the
> email spew, and use the money to get new
> information to your websites quickly,
> where I can find it easily, and without filling
> out more forms.

Right on. I have received no junk from Altera, but somehow I'm getting junk from Xilinx, even though the web download of the FPGA tool failed. Do they think I am that impressed that they need to push me to use their stuff?

http://indi.joox.net

Altera all the way.

cheers

p.s. Even Lattice are better on the spiel side, with no excess shite.

Article: 109801
Mike Treseler wrote:
> burn.sir@gmail.com wrote:
>
> > Would this go away if I somehow "flattened" my design?
>
> I would back out the changes one at a time
> to find the culprit. Maybe adding the entities
> created a pipeline stage somehow.
> Also check the rtl viewer.
>
> -- Mike Treseler

Actually, both designs are _identical_. I know that because we have some really tough tests for them. In addition, I spent 3-4 hours manually inspecting the code, the tests and all the simulation logs.

What happened is that the synthesis tool stopped optimising after a certain depth. At least, that is my theory...

burns

Article: 109802
cbr_929rr wrote:
> I am looking for ideas on how to implement SMPTE310 Tx and Rx
> interface.
>
> Algorithm/VHDL/Verilog examples would be helpful.
> Thanks.

Is this the MIDI variant?

Article: 109803
It is a sync serial interface to transport MPEG2. DVB-ASI (async serial interface) is the other popular standard.

jacko wrote:
> cbr_929rr wrote:
> > I am looking for ideas on how to implement SMPTE310 Tx and Rx
> > interface.
> >
> > Algorithm/VHDL/Verilog examples would be helpful.
> > Thanks.
>
> is this the MIDI variant?

Article: 109804
Antti wrote:
> Kolja Sulimma schrieb:
>
>>rickman schrieb:
>>
>>>Over the years I have gotten a lot of junk email from Xilinx to email
>>>addresses that I have given out only to support and never to any
>>>marketing channel. I have always been disappointed that Xilinx has
>>>done this. But now they have sunk to a new low, they are giving or
>>>selling my email address to third party junk emailers.
>>
>>I see that too.
>>I think it is bad business practice. It does not make me feel that I can
>>share sensitive information with them in a webcase or similar.
>>
>>And who knows whom Foundation ISE sends your source codes to.
>>If a company sees cloak and dagger methods as part of their business
>>model you can not trust them anymore.
>>
>>Kolja Sulimma
>
> and did you notice that in ISE 8.2 there is now also a WEB-TALKBACK
> feature that asks you to submit your design statistics to Xilinx !?
> (similar to Altera software)
>
> however if in Altera enabling the talkback enables the free use of jtag
> logic analyzer features then the talkback in Xilinx is simply an
> additional burden for you and you get nothing in return !
>
> Antti

Because ISE uses the IE ActiveX control (on Windoze), it is blocked at the source from any internet access on this system. They don't feel the need on Linux (obviously, because ActiveX doesn't exist there), so why use it on Windoze? Hint: ActiveX _is_ the problem. Any process that uses the IE ActiveX controls is blocked from access in both directions, so it's not just Xilinx, but it does seem silly. Why not just invoke web requests through $BROWSER?

So - no, Xilinx gets no statistics from me, even though I might actually be willing to share.

Cheers
PeteS

Article: 109805
On Oct 3, 11:16 am, "johnp" <johnp3+nos...@probo.com> wrote:
> I'm seeing some unexpected behavior when using the Xilinx TNM
> constraint in a ucf file.
> [...]
> I then tried:
> NET "u_os_if/rd_burst" TNM = x_rio_tx;
> NET "u_fx2if/reg_brst_rd" TNM = x_rio_tx;
> TIMESPEC "ts_os_x3" = FROM x_rio_tx to
> FFS(upo_joey_if/xmit_data<*>) = 12;
>
> and I was VERY surprised at the result. Instead of applying the
> constraint from the rd_burst and reg_brst_rd signals to the
> xmit_data<*> signals, it instead constrained a path from xmit_data<8>
> to xmit_data<15>.
>
> Is this yet another Xilinx bug or am I mis-using the TNM constraint?
>
> Thanks!
>
> John Providenza

The TNM grouping constraint, when applied to a net, collects all instances that fan _forward_ from the net, but does not collect the source that drives the net. To put the sources of nets "u_os_if/rd_burst" and "u_fx2if/reg_brst_rd" into group "x_rio_tx", you might most easily apply TNM to the primitive instances directly:

INST "u_os_if/rd_burst" TNM = x_rio_tx;
INST "u_fx2if/reg_brst_rd" TNM = x_rio_tx;

(assuming that _net_ u_os_if/rd_burst is sourced by a _FF_ with the same name, and similarly for the other FF output).

But be careful if you are using this in a UCF file. FFs may be duplicated during synthesis: you may have both a FF named "u_os_if/rd_burst" and a FF named "u_os_if/rd_burst_1", where the first FF controls some of your data bits and the second controls others. In this case, you might want to collect your sources using wildcards:

INST "u_os_if/rd_burst*" TNM = x_rio_tx;
INST "u_fx2if/reg_brst_rd*" TNM = x_rio_tx;

If your synthesis tool names duplicate FFs in a rational manner (and you've named your signals so the wildcard expressions don't include unwanted devices), this would collect all the ones you want into timegroup x_rio_tx.

Suppose the net rd_burst is not a FF output, but a combinatorial? Then you'd have to find all FFs that source the combinatorial and collect them into x_rio_tx. As far as I know, you have to do this manually; to my knowledge, there is no _backward_ propagating group collector that does this. (It is not a simple proposition, given the way synthesis tools can mangle a design from how you would like/expect it to be elaborated.)

HTH,
Just John

Article: 109806
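Pulling the pieces of this answer together, the UCF could end up looking something like the sketch below. This is only an illustration built from the names in the thread: the wildcard patterns would need checking against the actual netlist, and the 12 ns figure assumes the poster's bare "12" meant nanoseconds.

```
# Collect the source FFs; wildcards also catch synthesis-duplicated FFs
# such as u_os_if/rd_burst_1
INST "u_os_if/rd_burst*"    TNM = x_rio_tx;
INST "u_fx2if/reg_brst_rd*" TNM = x_rio_tx;

# Constrain from that group to the destination FFs
TIMESPEC "ts_os_x3" = FROM "x_rio_tx" TO FFS(upo_joey_if/xmit_data<*>) 12 ns;
```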
> It lacks nothing. In fact any design that does NOT generate a synchronously
> timed trailing edge reset is 'lacking' and will eventually fail because at
> some point that trailing edge of reset will come along at a time relative to
> the clock that violates setup/hold timing requirements and cause a flip flop
> or two to go to the wrong state. This is all assuming that the clock is
> free running (or at least running at the trailing edge of reset).

I can think of a couple of exceptions where a completely asynchronous reset won't cause any problems:

1. A free-running counter that has no unrecoverable illegal states (i.e. a bad state that doesn't eventually turn into a good state) and no requirement to be immediately operational after reset.

2. An FPGA that is waiting for an external event (like a discrete input or CPU register write) to start doing something, and the external event happens after the reset.

I can't think of an FPGA design that I've worked on where one of those options didn't apply.

There's also a small assumption hiding in #1 when using an HDL: that the compiler doesn't do anything funny, like recognize that a simple up or down counter in the code can be replaced by an LFSR, in which case there can be illegal states.

So for designs that meet those options, there's no need to panic and do a massive product recall. Yet for current and future designs, synchronizing the trailing edge of a reset signal is a great idea.

Article: 109807
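Exception #1 can be made concrete with a short, generic Verilog sketch (not from any poster's design): a free-running divider with no reset net at all, where every possible flop state is part of the normal counting sequence.

```verilog
// Sketch of exception 1: a free-running divider with no reset input.
// A plain binary counter has no unrecoverable states: whatever value
// the flops hold after configuration, the count wraps into the normal
// sequence, so the worst case is one period of wrong phase at startup.
module tick_gen (
    input  wire clk,
    output wire tick
);
    reg [7:0] count = 8'd0;   // FPGA bitstream init value, not a reset net

    always @(posedge clk)
        count <= count + 8'd1;

    assign tick = (count == 8'hFF);   // one pulse every 256 clocks
endmodule
```

The caveat from the post still applies: if a synthesis tool re-encoded this counter (e.g. as an LFSR), unreachable states could appear, so the netlist is worth a glance.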
The following is an informal letter to Xilinx requesting their continued efforts to increase the speed of their software tools. If there are incorrect or missing statements, please correct me!

Dear Xilinx:

As many of us spend numerous hours of our life waiting for Map/Par/Bitgen to finish, I hereby petition Xilinx, Inc., to consider this issue (of their tool speed) to be of the highest priority. I am now scared to purchase newer chips because I fear that their increased size and complexity will only further delay my company's development times. Please, please, please invest the time and money to make the tools execute faster.

Have you considered the following ideas for speeding up the process?

1. The largest benefit to speed would be obtained through making the tools multithreaded. Upcoming multi-core processors will soon be available on all workstation systems. What is it that is taking Xilinx years on end to make their tools multithreaded? There is no excuse for this. I assume the tools are written in C/C++. Cross-platform C/C++ threading libraries make thread management and synchronization easy (see boost.org).

2. Use a different algorithm. I understand that the tools currently rely on simulated annealing algorithms for placement and routing. This is probably a fine method historically, but we are arriving at the point where all paths are constrained and the paths are complex (not just vast in number). If there is no value in approximation, then the algorithm loses its value. Perhaps it is time to consider a branch-and-bound algorithm instead. This has the advantage of being easily threadable.

3. SIMD instructions are available on most modern processors. Are we taking full advantage of them? MMX, SSE1/2/3/4, etc.

4. Modern compilers have much improved memory management and compilation over those of previous years. Also, the underlying libraries for memory management and file IO can have a huge impact on speed. Which compiler are you using? Which libraries are you using? Have you tried the latest PathScale or Intel compilers?

5. In recent discussions about the speed of the map tool, I learned that it took an unearthly five minutes to simply load and parse a 40MB binary file on what is considered a fairly fast machine. It is obviously doing a number of sanity checks on the file that are likely unnecessary. It is also loading the chip description files at the same time. Even still, that seems slow to me. Can we expand the file format to include information about its own integrity? Can we increase the file caches? Are we using good, modern parser technology? Can we add command line parameters that trade higher speed for more memory usage and vice versa? Speaking of command line parameters, the software takes almost three seconds just to show them. Why does it take that long to simply initialize?

6. Xilinx's chips are supposedly useful for acceleration. If so, make a PCIe x4 board that accelerates the tools using some S3 chips and SRAM. I'd pay $1000 for a board that gave a 5x improvement. (okay, so that is way less than decent simulation tools, I confess I'm not willing to pay big dolla....)

7. Is Xilinx making its money on software or hardware? If it is not making money on software, then consider making it open source. More eyes on the code mean more speed.

Sincerely,
An HDL peon

Article: 109808
Brannon -

Although I'd like the tools to run faster, I think it is *far* more important for Xilinx to fix the numerous bugs and crashes.

Yet again I've had to completely re-build a project because Navigator corrupted the .ise file and the backup version.

Make the tools work, then speed them up.

John Providenza

Brannon wrote:
> The following is an informal letter to Xilinx requesting their
> continued efforts to increase the speed of their software tools.
<snip>

Article: 109809
Brannon wrote:
> The following is an informal letter to Xilinx requesting their
> continued efforts to increase the speed of their software tools.
<snip>

Xilinx has already sped up time to completion, at the cost of poorer end performance on some high performance designs. I'll take a PAR that takes longer to complete but gets closer to the timing previous versions got on hand-placed designs over having to run a faster PAR numerous times in order to get a routing solution that meets timing.

I have to wonder whether the writer of this letter looked at his own design for the reasons PAR was taking too long. Did he keep the levels of logic to a reasonable number for his desired timing target? Did he duplicate logic to reduce high-fanout nets? Did he try any floorplanning for critical parts of the design? Somehow I doubt it, yet those things can make a several-orders-of-magnitude difference in the time to run PAR.

Article: 109810
On Thu, 05 Oct 2006 10:34:22 GMT, "KJ" <kkjennings@sbcglobal.net> wrote:

>The next question to ponder is, given that the reset must be synchronized
>anyway, why use an asynchronous reset anywhere in your design (with the
>exception of course of the above mentioned synchronizer)? People will
>pontificate on the subject but have yet to produce any coherent reason to
>prefer async or sync. Or they will claim some logic resource advantage that
>when you test the claim is found to be baseless (in almost every case, the
>logic and performance is darn near identical).

How about this: in an FPGA, all the flops already have an async reset pin (unlike ASICs), so you're already paying for the potential setup and clk->q hit of that circuitry; you might as well take advantage of it, because going to a sync reset will need a mux at the flop's input. In an ASIC, depending on the library, adding the reset-condition mux at the input of non-reset flops may give you an advantage or not.

Article: 109811
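The trade-off being argued over comes down to two standard coding templates, sketched here in generic Verilog (not taken from anyone's post):

```verilog
// Two flops with identical behavior in normal operation, differing only
// in how reset is applied.
module reset_styles (
    input  wire clk,
    input  wire rst,
    input  wire d,
    output reg  q_async,
    output reg  q_sync
);
    // Async reset: maps onto the flop's dedicated clear pin, so the
    // D input is driven by 'd' alone.
    always @(posedge clk or posedge rst)
        if (rst) q_async <= 1'b0;
        else     q_async <= d;

    // Sync reset: the reset term folds into the D-input logic -- the
    // "mux at the input" mentioned above. In practice it is often
    // absorbed into an existing LUT, which is why measured cost is
    // frequently near zero, as KJ notes.
    always @(posedge clk)
        if (rst) q_sync <= 1'b0;
        else     q_sync <= d;
endmodule
```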
johnp wrote:
> Brannon -
>
> Although I'd like the tools to run faster, I think it is *far* more
> important for Xilinx to fix the numerous bugs and crashes.
>
> Yet again I've had to completely re-build a project because Navigator
> corrupted the .ise file and the backup version.
>
> Make the tools work, then speed them up.
>
> John Providenza
<snip>

The crashes are, no doubt, because of the increasing complexity of each part of the process the tools must evaluate. The *nix way was always 'do one thing and do it well', which used to exemplify the Xilinx tools. As they have got more complex, they have added things to each tool, such that they are now doing more than one thing. Adding such complexity adds exponential sources of problems.

I suggest each tool be completely re-evaluated - and if it's doing more than one thing, separate those things back out - to 'do one thing and do it well'. That would very probably deal with a lot of the crashes, and ultimately speed.

Cheers
PeteS

Article: 109812
I back up both requests: Xilinx and Altera should make their tools faster, especially make use of multi-cores. I hope that there is something coming soon, as this trend has been clear for over a year. E.g. I wrote on 1st March 2005 to this newsgroup:

> I think we should all encourage the FPGA- and EDA-tool-vendors to adapt
> their software for parallel algorithms (especially place and route), as the
> dual-cores are really coming soon and most of us will buy the fastest
> machine they can get for reasonable money. In fact, a parallel algorithm
> would already help a little bit today for P4s with hyper-threading.

Also, Xilinx should improve their software quality; with every new ISE you get new errors in designs that worked fine with previous releases. It's frustrating... In doubt I would also recommend concentrating on the old bugs first, before introducing new ones with multithreading... (my Xilinx designs are not that large ;-)

Thomas

"johnp" <johnp3+nospam@probo.com> schrieb im Newsbeitrag news:1160077316.480007.91410@m7g2000cwm.googlegroups.com...
> Brannon -
>
> Although I'd like the tools to run faster, I think it is *far* more
> important for Xilinx to fix the numerous bugs and crashes.
<snip>

Article: 109813
burn.sir@gmail.com wrote:
> Mike Treseler wrote:
> > I would back out the changes one at a time
> > to find the culprit.
<snip>
> What happened is that the synthesis tool stopped optimising after
> a certain depth. At least, that is my theory...

Just out of curiosity: have you been able to re-route the old design and get the same result as before?

Article: 109814
Just wondering here... What platform are you running the tools on?

I benchmarked several chip design tools on 'doze and 'nix. If the tool stayed inside physical memory, they ran at about even speeds. I found that once physical memory was exhausted, the 'nix variant would run at least 10x faster. An hour-plus simulation would finish in under 5 minutes.

I traced the difference to the memory manager. In the 'doze variant, less than 10% of the processor was left for the application to use while the swapper was running. This may have changed in the last couple of years, but I doubt it.

GH

Article: 109815
> I have to wonder whether the writer of this letter looked at his own
> design for the reasons PAR was taking too long. Did he keep the levels
> of logic to a reasonable number for his desired timing target? Did he
> duplicate logic to reduce high fanout nets? Did he try any
> floorplanning for critical parts of the design? Somehow I doubt it, yet
> those things can make a several orders of magnitude difference in the
> time to run PAR.

My logic and fanouts are fine. I confess, though, that I have never done floorplanning. I wouldn't even know where to start with it. I don't even know what level floorplanning is done at.

I rarely use XST; I use my own EDIF generation tools. The tools I use tile out vast amounts of logic recursively; attaching location constraints to them is quite difficult (their names change each compile, the flattened logic does not always look the same, etc.). My impression was that floorplanning required constraints at some level and that it is difficult using XST as well. Is that not true? I don't doubt that my top-level tool choice is hindering my ability to take full advantage of the Xilinx tools.

How much time would it take to do a floorplan on a four-million-gate project? At what stage during the development process do you do it? I'm trying to improve my development time, not my retargetability.

Article: 109816
Hello David, You can use the TCL interface to automate the memory update. Use quartus_sh --qhelp to get the help on the "insystem_memory_edit" TCL package. This package is only available in the shell provided by quartus_stp.exe. If you are using an external logic analyzer, you can install the free small standalone programmer (https://www.altera.com/support/software/download/programming/quartus2/dnl-quartus2_programmer.jsp) with SignalTap II on the logic analyzer. This package includes the quartus_stp.exe executable. If you are using the SignalTap II Logic Analyzer, the acquisition can be started in quartus_stp.exe as well using the TCL commands from the "stp" package. Hope this helps, Subroto Datta Altera Corp. On Oct 5, 1:54 am, "david" <1024.da...@gmail.com> wrote: > hello > thank you for your reply > > we already tried this option before, but we have a problem, because we > need to modify the data in the memory while the system is running (the > in-system memory size is too small for our application), every 512 > clock cycles. > can we update the memory automatically with new data from predefined files > (hex files) while the system is running? (it's not practical to rewrite > manually every 512 clock cycles, we need the system to run at least > for 32768 clock cycles continuously, and we can spend clock cycles as needed to > rewrite the content of the memory) > > thanks > david > > Subroto Datta wrote: > > > > > It is definitely possible to update the memory and constants in a programmed > > device from Quartus using the In System Memory Content Editor. Details can > > be found at: > > >http://www.altera.com/literature/hb/qts/qts_qii53012.pdf > > > You can use this in conjunction with SignalTap II Embedded logic analyzer > > to debug your work. > > > Hope this helps, > > Subroto Datta > > Altera Corp. > > > "david" <1024.da...@gmail.com> wrote in message > >news:1159976602.300018.42380@e3g2000cwe.googlegroups.com... 
> > > hello > > > i am a student, working on development kit nios 2 cyclone edition. > > > i want to use the logic analyzer to import data to the fpga from the > > > logic analyzer, can i do it?Article: 109817
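For what it's worth, the rewrite-every-512-cycles loop David describes could be scripted instead of driven by hand. Below is a minimal sketch that generates a quartus_stp TCL script updating one in-system memory instance from a series of hex files. The insystem_memory_edit command names come from the package Subroto mentions; the hardware name, device name, instance index, and file names are placeholders you would replace with your own setup.

```python
# Sketch: generate a quartus_stp TCL script that rewrites an in-system
# memory from a sequence of .hex files while the design keeps running.
# Command names are from Quartus's "insystem_memory_edit" TCL package;
# the hardware/device strings below are placeholders, not real cables.

def make_update_script(hex_files, instance_index=0):
    lines = [
        '# run with: quartus_stp -t update_mem.tcl',
        'begin_memory_edit -hardware_name "USB-Blaster [USB-0]" \\',
        '    -device_name "@1: EP1C20 (0x020840DD)"',
    ]
    for hex_file in hex_files:
        # Each pass overwrites the same memory instance with the next file.
        lines.append(
            f'update_content_to_memory_from_file '
            f'-instance_index {instance_index} '
            f'-mem_file_path "{hex_file}" -mem_file_type hex'
        )
    lines.append('end_memory_edit')
    return '\n'.join(lines)

script = make_update_script([f"block_{i}.hex" for i in range(4)])
print(script)
```

The design would still need a handshake (e.g. a flag register polled from TCL) to tell the script when the next 512-cycle window is ready, since the JTAG update is far slower than the fabric clock.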
Hi Brannon, So, I guess we'd all like the tools to run faster, and you make some good suggestions. However, I wonder how often you _need_ to do a PAR cycle? Please excuse me if I'm teaching you to suck eggs, but I just want to check you've considered a development process where you simulate things before PAR. This way, your logic errors are found in the simulator, not the real hardware. If you like to try stuff out as you go, maybe you could run the PAR each evening before heading out to the pub, that's what I sometimes do. :-) HTH, Syms.Article: 109818
Hi everybody, I'm going to build a USB JTAG cable and I don't know what protocol I should use for communication. The only cable I can fully implement this time is the Digilent USB cable... (according to the code of Zoltan Csizmadia's cable server project) I was trying to reverse engineer Xilinx's USB Platform Cable protocol, but after a point I gave up. (Mostly because the firmware of the cable is shipped with the PC software, so they can easily modify the protocol in a new version of ISE) So, some questions: Is Digilent's protocol free to implement in my device? Is there any cable with an open protocol? (with a GPL-like license) If an open-protocol cable exists, is it well supported by applications? Thanks, gossArticle: 109819
jacko wrote: > hi > > 283 LEs > 6% Cyclone II EP2C5T144C6 > 44 Warnings > Still unverified. just retargeted to MAX II to see how it went. looks good. 386 LEs 17% MAX II EPM2210F256C3 > yes got it to compile at last after discovering a bit more about buses > and naming of them. > > TEST16 toplevel object schematic encapsulates the indi16 with reset > logic and tristate databus, to connect with external pins on the chip. > > Hopefully not a lot more editing needs to be done for the basic design, > and i am now in a position to start putting together more architecture > documentation. > > cheers > > p.s. the cpu is suitable for having a C compiler written for it and > will not be limited to forth. > > http://indi.joox.net > http://indi.microfpga.com > http://indi.hpsdr.com > > all three urls work but some do not have directory listings yet.Article: 109820
When I was using VCS to simulate complicated stuff before, it took several hours per run. I agree that the output was infinitely more useful. However, have you seen the prices on such EDIF tools? You can run a lot of 10 minute compiles before you pay for a $10k piece of software. And that's a cheap one. Symon wrote: > Hi Brannon, > So, I guess we'd all like the tools to run faster, and you make some good > suggestions. > However, I wonder how often you _need_ to do a PAR cycle? Please excuse me > if I'm teaching you to suck eggs, but I just want to check you've considered > a development process where you simulate things before PAR. This way, your > logic errors are found in the simulator, not the real hardware. If you like > to try stuff out as you go, maybe you could run the PAR each evening before > heading out to the pub, that's what I sometimes do. :-) > HTH, Syms.Article: 109821
Antti wrote: > to my surprise I just heard comments about Xilinx giving up hard > processor cores - well I think its not so, and that V5-FX has PPC440 > cores in it, but isnt it about time that there would be some > information update when V5FX is actually coming? Hang on... I'm still waiting for the V4 FX line to be fully rolled out.. Cheers, JonArticle: 109822
I would settle for an incremental compile that worked without me spending more time screwing with it than it takes to just do a fresh compile each time. There's no reason that changing one pin assignment or one DCM parameter should cost 10 minutes' time. You can talk about simulation ahead of time but (a) that doesn't work too well when you're doing signal processing, it would take me a month to build a testbench that would tell me what five minutes of a live system tells me and (b) when it comes to integration time there are always nits and gnats that have to be tweaked (wrong pin assignment, CE to a chip is the wrong polarity, etc.). -ClarkArticle: 109823
Brannon wrote: >>I have to wonder whether the writer of this letter looked at his own >>design for the reasons PAR was taking too long. Did he keep the levels >>of logic to a reasonable number for his desired timing target? Did he >>duplicate logic to reduce high fanout nets? Did he try any >>floorplanning for critical parts of the design? Somehow I doubt it, yet >>those things can make a several orders of magnitude difference in the >>time to run PAR. > > > My logic and fannouts are fine. I confess, though, that I have never > done floorplanning. I wouldn't even know where to start with it. I > don't even know what level floorplanning is done at. I rarely use XST; > I use my own EDIF generation tools. The tools I use tile out vast > amounts of logic recursively; attaching location constraints to them is > quite difficult (their names change each compile, the flattened logic > does not always look the same, etc.). My impression was that > floorplanning required constraints at some level and that it is > difficult using XST as well. Is that not true? I don't doubt that my > top-level tool choice is hindering my ability to take full advantage of > the Xilinx tools. How much time would it take to do a floorplan on a > four million gate project? At what stage during the development process > do you do it? I'm trying to increase my development time, not my > retargetability time. > The time spent doing floorplanning depends on how hierarchical your design is and whether you take advantage of the hierarchy while doing your floorplanning. In your case, it sounds like you are building a design out of nearly identical (?) tiles, each comprised of xilinx primitives. If that is the case, yours is an ideal candidate for floorplanning, as it is fairly easy to add the placement constraints to the edif netlist at the time of compilation. Most of my floorplanning is done in the source where I floorplan the elements at that hierarchical level. 
When I get to the top level design, the floorplanning amounts to placing a relatively small number of pre-placed tiles. The levels of logic and fanouts comment is relative to your projected speed. The more slack you have there, the faster the tool will complete. If the timing is tight, then it takes the tool much longer. Also, the automatic placer is notoriously bad at a few things: 1) placement of multiplier and BRAM blocks, 2) placement of data width elements (i.e. it doesn't line up the bits horizontally in a chain of adders), and 3) placing LUTs that don't have registers between them...i.e. if you have two levels of logic between flip-flops, the second LUT (the closest to the next flip-flop) is placed well, but the LUT feeding a LUT is often placed far away. This absolutely kills timing, and it takes the tool a very long time to anneal the LUTs back to a reasonable location. Keeping logic to one level between flip-flops or hand placing the LUTs solves that. Unfortunately, the LUT names and logic distribution over a LUT tree is the stuff that changes the most from synthesis run to synthesis run, so floorplanning LUT locations in multi-level logic is probably the hardest part of floorplanning.Article: 109824
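Since Brannon already generates his EDIF from his own tools, the per-tile placement could be generated right alongside the netlist. The sketch below shows one way to compute LOC strings for a registered datapath column so that bits line up vertically; the SLICE_XnYm site names follow the Xilinx-style slice grid, but the two-flip-flops-per-slice packing and the signal names here are illustrative assumptions, not something the tools mandate.

```python
# Sketch: compute LOC site names for one column of a tiled datapath so a
# netlist generator can attach them as properties on its EDIF instances.
# SLICE_XnYm naming follows the Xilinx slice grid; the packing ratio and
# the q[i] register names are hypothetical, for illustration only.

def datapath_locs(x, y0, width, ffs_per_slice=2):
    """Place bit i of a width-bit register column at a fixed slice,
    stacking bits vertically so adder chains and busses line up."""
    return {
        f"q[{i}]": f"SLICE_X{x}Y{y0 + i // ffs_per_slice}"
        for i in range(width)
    }

# One 16-bit tile: its bits occupy SLICE_X4Y10..Y17.  Each tile then
# becomes a pre-placed block, and top-level floorplanning reduces to
# arranging tiles instead of thousands of individual LUTs and FFs.
locs = datapath_locs(x=4, y0=10, width=16)
```

Because the coordinates are computed from the tile's origin (x, y0), the names changing between compiles matters less: you regenerate the constraints with the netlist on every run instead of maintaining them by hand.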
Brannon wrote: > When I was using VCS to simulate complicated stuff before, it took > several hours per run. I agree that the output was infinitely more > useful. However, have you seen the prices on such EDIF tools? You can > run a lot of 10 minute comiples before you pay for a $10k piece of > software. And that's a cheap one. > > Symon wrote: You might look at the Aldec simulator. The version I use does mixed language simulation including edif netlists. I don't know the current price, but I believe it is still well under $10K.