Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
> > > Simple question!
> > >
> > > My design simulates with modelsim and fits OK with webpack.
> > >
> > > My question is how do I re-import my design into modelsim to check it
> > > still works?
> > >
> > > Cheers
> > > Dave
> >
> > Why would you want to do that?
> >
> > There should be no need for timed stand-alone simulations provided you
> > used TIMESPECs, and your TIMESPECs are accurate.
>
> ... and the tools don't have bugs.
> ... and the speed files accurately reflect the operation of the silicon.
>
> Bitten by both on my last project. Ouch.
>
> In the speed file case, a gate level simulation won't help, as the back
> annotated VHDL (or Verilog, etc.) will have the values from the speed
> files, not the values from the silicon. The only way to catch these is
> to try them in the lab, over temperature and voltage variations.
>
> The gate level simulation was useful for picking up some of the tool
> bugs. Other bugs were more amenable to inspection using FPGA Editor,
> or netlist browsing.

Please name the tools and tool bugs you found doing timed simulations, as
well as the bugs you found using FPGA Editor. I know MANY designers who do
exactly as I stated and have never had a problem.

If you do a back annotated simulation, you do not have to run it as a timed
simulation; you can still do it as unit delay. As you said, the speed file
issue is a non-issue, simply because you are using the speed files TO
generate the timing information for the back annotated simulation. So why
even bring that up?

Article: 31651
I just read this morning that this device is the only one which won't burn
up because of shorts if random configuration bit streams are written to it.

"Michael Strothjohann" <strothjohann@rheinahrcampus.de> wrote in message
news:3B177CB9.648E6B2@rheinahrcampus.de...
>
> Thomas Karlsson schrieb:
> >
> > Why isn't there any information about it on the Xilinx website?
> > Is the device obsolete?
>
> Hi Thomas,
>
> academic users didn't order high volumes (bad news) and
> real (industrial) designers didn't love the 6200 (very bad news).
> Xilinx likes high volumes, so the 6200 is dead now.
> simple. very simple.
>
> michael strothjohann

Article: 31652
In reply to:
http://www.etin.com/article/Article.jsp?messageID=25364105&folder=comp.arch.fpga

So, does anyone know off the top of their head which devices are being
used for evolutionary algorithms and/or partial reconfigurability research?

Victor
----
Posted via http://www.etin.com - the FREE public USENET portal on the Web
Complete SEARCHING, BROWSING, and POSTING of text and BINARY messages!

Article: 31653
"Ben" <ejhong@future.co.kr> wrote in message
news:<3VkR6.218$by4.4024@news2.bora.net>...
> Hi,
>
> I use two independent clocks and an asynchronous fifo from synopsys
> DesignWare in my design.
> This means there is frequent data transfer between the clock domains.
> During the simulation, a flipflop which gets its signal from another
> flipflop of a different clock domain generated a setup time violation
> error. The flipflop must be some kind of synchronizer for internal
> control of the fifo.
> I think this kind of error is ubiquitous for 2-clock designs or any
> design that has a synchronizer for an asynchronous signal in it, and I
> believe that there is some technique which will prevent or bypass the
> timing error during simulation.
> Please help me out of this problem.
>
> Thanks,
>
> Ben

I believe Synopsys DW FIFOs have internal synchronizers, so synchronizing
the signals across the two clock domains should not be a concern if the DW
FIFOs work as Synopsys specifies.

I don't know which simulator and HDL (Verilog or VHDL) you are using. For
Verilog, if you just find the timing error messages annoying, you can
remove the $setuphold() line in the simulation model for the flipflop. If
you are using Synopsys vcs and doing functional simulations, you can
simply turn off timing checks by using the +nospecify and +notimingchecks
options at compile time.

Jim

Article: 31654
Allan Herriman wrote:
> Austin Franklin wrote:
> > > Hi again,
> > >
> > > Simple question!
> > >
> > > My design simulates with modelsim and fits OK with webpack.
> > >
> > > My question is how do I re-import my design into modelsim to check it
> > > still works?
> > >
> > > Cheers
> > > Dave
> >
> > Why would you want to do that?
> >
> > There should be no need for timed stand-alone simulations provided you
> > used TIMESPECs, and your TIMESPECs are accurate.
>
> ... and the tools don't have bugs.

Amen to that! But of course just to keep you on your toes they sometimes
hide bugs in BITGEN where nothing but a few hair-tearing hours on the
bench followed by some further depressing days talking to the support line
can find them [e.g. 2.1iSP? DLL bug. I didn't find it myself but my
heartfelt sympathy went out to whoever did].

Article: 31655
Given the current climate:

Check out the company's financial statements; if they are publicly listed
then check out their details on Yahoo Finance and read the comments board.
Look to see the last time they had any layoffs...

Choose the one that does better!

--
___#--- Andrew MacCormack andrewm@tality.com
L_ _| Senior Design Engineer
| | Tality, Alba Campus, Livingston EH54 7HH, Scotland
! | Phone: +44 1506 595360 Fax: +44 1506 595959
T A L I T Y http://www.tality.com

Article: 31656
"Austin Franklin" <austin@dar54kroom.com> writes:
> And this is a HUGE beef I've had with Xilinx for nearly ten years! Why
> is the global reset signal NOT hard routed using a LOW SKEW net? Come
> on, this isn't rocket science guys! I characterized this back on the 4k
> series in the early 90's. I actually found that the GSR in a 4010 had
> nearly 80ns skew across the part! You've got lots of clocks routed with
> low skew nets, why has this not been done with the global reset?

Is there any way you can use one of the clock nets for reset?

Article: 31657
Hello All,

I have just encountered a problem simulating my Viewdraw schematics that
contain VirtexII flip flop symbols. It appears that the Viewdraw symbols
use the @INIT parameterized attributes to control the initial state of the
flip flops, but the simulation netlister VSM does not parse them
correctly. I get the following error from VSM:

ER Error: viewbase: Error 338: Undefined variable(s) encountered
(FLOATVAL=@INIT). | Occurred on net instance $1I1\$1I15\GSR_INIT0

I realize that it is unfashionable to use schematic entry, but I have some
large blocks of IP that I want to carry forward from an old Virtex design.
If anyone knows a workaround for this one I would appreciate a hint.

Thanks,
--
Pete Dudley
Sandia Labs
Albuquerque, NM
padudle@sandia.gov

Article: 31658
Thanks Rick, I'll give it a try.

Dave

"Rick Filipkiewicz" <rick@algor.co.uk> wrote in message
news:3B16F0AD.6B720E70@algor.co.uk...
>
> Speedy Zero Two wrote:
>
> > Hi again,
> >
> > Simple question!
> >
> > My design simulates with modelsim and fits OK with webpack.
> >
> > My question is how do I re-import my design into modelsim to check it
> > still works?
> >
> > Cheers
> > Dave
>
> Assuming you mean how do you generate a Verilog/VHDL file that
> represents the fitted/routed design?
>
> If so then check out the docs on the two utilities NGDANNO and
> NGD2VER/NGD2VHD. The first one generates a timing-annotated .nga file,
> and the second takes the .nga and produces both a Verilog simulation
> model and an SDF timing file.
>
> This applies to FPGAs; for CPLDs it's slightly different, since the
> TSIM utility is used to produce the .nga.

Article: 31659
Hi,

does anyone have experience/more info on Xilinx's BITGEN option for
bitstream compression? I would like to save Flash memory space in an
embedded application and wonder:

- what the compression rate is
- what the overhead is for decompression (is this built into the device
  or do I have to do that from the uP?)

Any help would be appreciated.

Werner Kittinger

Article: 31660
Werner,

Bitstream compression uses something called MultiFrame writes. This means
that if two (or more) configuration frames have the same data, they can
all be written with one command. However, this does mean that the
compression rate will be almost totally dependent on the design itself. If
you have a design that uses only a small portion of the device, the
compression rate will likely be extraordinarily high. But if you have a
99% utilized device, you may get no real space savings at all.

Probably the best thing to do is to size your flash after most of the FPGA
design is complete; then you will have a good idea of your real bitstream
size.

HTH,
Mike

Article: 31661
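Mike's point that the compression rate depends almost entirely on how many
configuration frames repeat can be sketched numerically. This is only a
back-of-the-envelope model, not the real BITGEN algorithm: the frame size
and the one-write-per-duplicate-group assumption are simplifications made
up for illustration.

```python
# Rough model: treat the bitstream as fixed-size frames and assume every
# group of identical frames can be written once (MultiFrame write).
from collections import Counter

def compression_estimate(bitstream: bytes, frame_size: int) -> float:
    """Fraction of frames that would still need to be written."""
    frames = [bitstream[i:i + frame_size]
              for i in range(0, len(bitstream), frame_size)]
    unique = Counter(frames)
    return len(unique) / len(frames)

# A sparsely used device: most frames are identical (all zeros), so only
# a small fraction of the frames would actually have to be written.
sparse = bytes(1024) + b"\x5a" * 64
print(compression_estimate(sparse, 64))
```

With a fully utilized device every frame tends to be distinct, and the
ratio approaches 1.0, matching Mike's "no real space savings" case.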
Eric Smith <eric-no-spam-for-me@brouhaha.com> writes:
> "Austin Franklin" <austin@dar54kroom.com> writes:
> > And this is a HUGE beef I've had with Xilinx for nearly ten years!
> > Why is the global reset signal NOT hard routed using a LOW SKEW net?
> > Come on, this isn't rocket science guys! I characterized this back on
> > the 4k series in the early 90's. I actually found that the GSR in a
> > 4010 had nearly 80ns skew across the part! You've got lots of clocks
> > routed with low skew nets, why has this not been done with the global
> > reset?
>
> Is there any way you can use one of the clock nets for reset?

No you can't. At least not in Virtex and Spartan-II. I just looked at the
JBits docs. FF set/reset can only be driven from the GSR or the "SR" input
to the CLB. SR can not be driven from the clock lines. There are 4*6
transistors per CLB missing there. :-(

Ditto for the CE input. That would also be nice, together with driving
individual columns' clock lines separately.

--
Neil Franklin, neil@franklin.ch.remove http://neil.franklin.ch/
Hacker, Unix Guru, El Eng HTL/BSc, Sysadmin, Archer, Roleplayer
- Intellectual Property is Intellectual Robbery

Article: 31663
Yes, that was its claim to fame. There was only one possible driver per
metal line, so there could not possibly be any contention.

But the product is dead: not manufactured and not supported anymore.

Peter Alfke, Xilinx Applications
===================================
Dave Feustel wrote:
> I just read this morning that this device is the only one which won't
> burn up because of shorts if random configuration bit streams are
> written to it.

Article: 31664
I would choose Option 1 if the company is in good financial standing and
if the work environment is great. If they are doing verification using
something other than the Verilog/VHDL which you already know, e.g. PLI,
C/C++, Vera, Specman, SystemC, Superlog, you will learn a lot more to
enhance your career. My employer tells me that it is very difficult to
find good, experienced design verification engineers with strong technical
backgrounds...

Wish you all the best...

ASIC Engineer wrote:
> Hello Gurus,
>          I am an ASIC Designer with 3+ years of experience and am
> about to make a next move. Here is where I need your suggestions. I
> have 2 (at least) competitive offers which differ in the work nature.
>
> Option 1: SoC design + verification (the major part) for an
> Ethernet-related communications chip.
>
> Option 2: Developing RTL (+ synthesis, simulation, etc.). USB 2.0 is
> something they said they are developing. Also some maintenance of
> previous peripherals (USB) is going on. They are also developing
> interface blocks to act as an interface from their proprietary bus
> standards to the AMBA bus. The design work would be roughly 30%, they
> say.
>
> I am OK with VHDL & Verilog.
>
> Given the above situation I am slightly confused about which one to
> take.
>
> I would be thankful if you could share your views on this.
>
> Regards,
> ASIC Engineer
>
> P.S. Sorry for this non-technical post, didn't know where else to ask.

Article: 31665
How or where can I get a Z180 or Z182 core???

Thanks
Iouri

CBFalconer wrote:
> Richard Erlacher wrote:
> >
> > On Tue, 29 May 2001 13:06:57 GMT, "Mark Walter" <maw@nospam.com>
> > wrote:
> >
> > >It appears that the tools for the SFL language support conversion to
> > >Verilog or VHDL. Is it possible someone can convert this 8080
> > >processor core into Verilog or VHDL. Then it could be used with the
> > >Xilinx tools for creating an 8080 clone...
> >
> > Since there are still V40 and V50 devices available, albeit with some
> > difficulty, but at a small fraction of the cost of a major FPGA, it
> > might be well to examine one of those processors for use of its
> > built-in 8080 core. You can, if you like, use the 8086-compatible
> > instruction set to execute what will probably have to be an original
> > BIOS, yet use the internal 8080 to execute the CP/M code. Of course
> > it's not Z-80 compatible, but who cares?
>
> That only makes sense if you want the 8086 instruction set also.
> Otherwise why not just use a Z180? All Z80's execute the 8080
> instruction set, with the exception being some parity bit
> settings, and nobody in their right mind ever wrote code that ran
> into that after the first Z80's came out.
>
> --
> Chuck F (cbfalconer@my-deja.com) (cbfalconer@XXXXworldnet.att.net)
> http://www.qwikpages.com/backstreets/cbfalconer :=(down for now)
> (Remove "NOSPAM." from reply address. my-deja works unmodified)
> mailto:uce@ftc.gov (for spambots to harvest)

Article: 31666
Iouri Besperstov wrote:
>
> CBFalconer wrote:
>
> > Richard Erlacher wrote:
> > >
> > > On Tue, 29 May 2001 13:06:57 GMT, "Mark Walter" <maw@nospam.com>
> > > wrote:
> > >
> > > >It appears that the tools for the SFL language support conversion
> > > >to Verilog or VHDL. Is it possible someone can convert this 8080
> > > >processor core into Verilog or VHDL. Then it could be used with
> > > >the Xilinx tools for creating an 8080 clone...
> > >
> > > Since there are still V40 and V50 devices available, albeit with
> > > some difficulty, but at a small fraction of the cost of a major
> > > FPGA, it might be well to examine one of those processors for use
> > > of its built-in 8080 core. You can, if you like, use the
> > > 8086-compatible instruction set to execute what will probably have
> > > to be an original BIOS, yet use the internal 8080 to execute the
> > > CP/M code. Of course it's not Z-80 compatible, but who cares?
> >
> > That only makes sense if you want the 8086 instruction set also.
> > Otherwise why not just use a Z180? All Z80's execute the 8080
> > instruction set, with the exception being some parity bit
> > settings, and nobody in their right mind ever wrote code that ran
> > into that after the first Z80's came out.
>
> How or where can I get a Z180 or Z182 core???

Try Zilog. http://www.zilog.com Or almost any distributor.

--
Chuck F (cbfalconer@my-deja.com) (cbfalconer@XXXXworldnet.att.net)
http://www.qwikpages.com/backstreets/cbfalconer :=(down for now)
(Remove "NOSPAM." from reply address. my-deja works unmodified)
mailto:uce@ftc.gov (for spambots to harvest)

Article: 31667
It is very interesting that they have a special cell for synchronizers,
though I don't find any special cell in my ASIC library.

Fortunately, I could avoid this timing error during Verilog prelayout
simulation by using the characteristics of the two clocks. One clock
domain of my design was synthesized to barely fit into 30ns (33MHz PCI
clock) and the other into 40ns. Thus if I use two clocks with periods of
30ns and 40ns, and make the initial rising edges of the two clocks
coincide, then I have a 10ns margin between any neighboring rising edges
of the two clocks. This blew away my agony.

Ben

Paul Campbell <paul@verifarm.com> wrote in message
news:gqlR6.730$SP2.291672@news.pacbell.net...
> Ben wrote:
>
> > Hi,
> >
> > I use two independent clocks and an asynchronous fifo from synopsys
> > DesignWare in my design.
> > This means there is frequent data transfer between the clock domains.
> > During the simulation, a flipflop which gets its signal from another
> > flipflop of a different clock domain generated a setup time violation
> > error. The flipflop must be some kind of synchronizer for internal
> > control of the fifo. I think this kind of error is ubiquitous for
> > 2-clock designs or any design that has a synchronizer for an
> > asynchronous signal in it, and I believe that there is some technique
> > which will prevent or bypass the timing error during simulation.
>
> yup you're right - it's common to use a special cell for these sorts of
> synchronizers - one that is more metastability resistant - its Verilog
> model is usually without setup/hold checks and has a long clk->Q. The
> Synopsys version of the cell also usually has a long clk->Q and you
> need to set a false path to its input (make sure its input is only
> driven directly from a single flop in the other domain - no
> combinatorial logic that can make the chance of metastability higher).
>
> Paul

Article: 31668
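Ben's 10ns figure can be checked mechanically: with both clocks' initial
rising edges aligned at t = 0, the smallest nonzero gap between any two
rising edges over the common period is the gcd of the two periods. A quick
sketch of that arithmetic (a software check only, nothing to do with the
actual simulation flow):

```python
# Enumerate rising edges of two aligned clocks over one hyperperiod and
# find the minimum nonzero spacing between edges of different clocks.
from math import gcd

def min_edge_separation(p1: int, p2: int) -> int:
    """Minimum nonzero gap between rising edges of two aligned clocks
    with integer periods p1 and p2 (same time unit)."""
    hyper = p1 * p2 // gcd(p1, p2)          # common (hyper)period
    edges1 = range(0, hyper, p1)
    edges2 = range(0, hyper, p2)
    gaps = [abs(a - b) for a in edges1 for b in edges2 if a != b]
    return min(gaps)

print(min_edge_separation(30, 40))  # -> 10, the margin Ben describes
```

Note this guaranteed margin only holds while the two clocks stay exactly
phase-aligned, which a simulator provides but real silicon generally does
not; the synchronizer cells Paul describes remain necessary in hardware.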
Hi,

Do you use the Project Manager in Foundation 3.1? Actually the Project
Manager in Foundation 3.1 doesn't support the Xilinx 6200. It supports the
Xilinx 4000 and 3000 series. The software that supports the Xilinx 6200 is
XACT6000 and maybe the HOT WORKS board.

The question is: how can I tell the floor planning inside the chip using
XACT6000? Is there an FPGA editor inside it?

sincerely
-------------
Kuan Zhou
ECSE department

On Fri, 1 Jun 2001, Thomas Karlsson wrote:

> Hi,
>
> Actually, I have never heard of a Xilinx 6200 family! Do you mean the
> 5200 family? Otherwise please tell me what the 6200 family is.
>
> If you want to examine how the Place&Route tool has mapped the logic
> into the CLBs, then you could use the Xilinx tool FPGA Editor. In this
> tool you can examine each CLB in detail: the equations for the look-up
> tables, routing between CLBs, etc.
> The input file for this tool is the <design_name>.ncd
>
> Regards
> /Thomas
>
> "Kuan Zhou" <zhouk@rpi.edu> wrote in message
> news:Pine.SOL.3.96.1010530182531.15124A-100000@rcs-sun2.rcs.rpi.edu...
> > Hi,
> >     I am a guy who is looking at the performance of the Xilinx 6200
> > chips.
> >     When I download the compiled bit streams into a Xilinx 6200 chip,
> > is there any tool or file for me to easily tell how the circuits are
> > mapped in the Xilinx 6200 chip? I want to know the functions of each
> > CLB in the chip during the application.
> >     Is there any data sheet describing that?
> >
> > sincerely
> > -------------
> > Kuan Zhou

Article: 31669
Hi,

What is the CMOS technology for the Xilinx 6200 series chips? 0.5 micron
or something else?

sincerely
-------------
Kuan Zhou
ECSE department

Article: 31670
> > > to develop a W2K driver
> > > for the Spartan II PCI core.
> >
> > Why would you need a driver for the PCI core?
>
> Because I don't have one. Actually I need a driver to drive everything
> that's behind the PCI core. But before I can drive that I first have to
> access the PCI core itself and I don't have code for that.
>
> Clemens

The PCI core itself, as far as PCI accessibility, is really only
configuration space. I don't believe there's really anything to access
there for a "driver"... If you mean access the resources that are behind
the BARs etc., of course, that makes sense... but there isn't anything in
the core itself that requires a driver, per se, I don't believe, and they
would be particular to the application the back end is designed for, not
the core.

I don't understand what would be particular to "the PCI core", as opposed
to any PCI device... the real particulars for a driver are what's in the
memory and I/O space, and that's outside the core.

Article: 31671
To start with, you probably want to use separate gray-coded counters for
each address. Also, the flag generation math is gray-coded math too. As
you figured, you probably want to be careful with the flags. I typically
check/set the flags for multiple cycles in one domain, and register them
in the domain I am using the flag in. One thing to remember is that some
of the flags need to have the respective counter enable signal in their
equations... it'll become obvious why as you build it.

"Jamil Khatib" <khatib@ieee.org> wrote in message
news:3B16BA19.BDF5EA0D@ieee.org...
> Hi,
>     I am trying to implement a FIFO buffer with two different clocks
> for read and write. I am going to use a dual port memory core but I do
> not know how to handle the flags and how to track the number of bytes
> in the buffer.
>     Moreover, how can I avoid metastability on the flags?
>
> Regards
> Jamil Khatib

Article: 31672
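The gray-coded counters suggested above rely on one property: exactly one
bit changes per increment, so a pointer sampled mid-transition in the other
clock domain is off by at most one count rather than wildly wrong. A
minimal sketch of the binary/Gray conversions (a software model only, not
the HDL implementation):

```python
# Standard binary <-> Gray code conversions used for async FIFO pointers.
def bin_to_gray(n: int) -> int:
    return n ^ (n >> 1)

def gray_to_bin(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Exactly one bit flips between consecutive Gray codes -- the property
# that makes the pointers safe to synchronize across clock domains.
for i in range(15):
    diff = bin_to_gray(i) ^ bin_to_gray(i + 1)
    assert diff & (diff - 1) == 0  # power of two: exactly one bit set

print(bin_to_gray(4))  # -> 6 (binary 100 -> Gray 110)
```

The flag math Austin mentions (full/empty comparisons) is then done on the
Gray-coded values, or on binary values reconverted with gray_to_bin after
synchronization.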
Hi,

ASIC libraries usually provide PCI pads, and I think these pads are
adequate to be used for all the bidirectional PCI pins such as AD, FRAME#,
IRDY#, etc. But I don't know if I can use the PCI bidirectional pad for
input-only pins like CLK, RST#, IDSEL, and GNT# (in the case of a
non-arbiter device). In my opinion, normal CMOS pads (non-PCI) should be
used for CLK, RST#, and IDSEL because they usually come from the system,
not from the PCI bus. GNT# also often comes from a non-PCI device. Am I
right?

Ben

Article: 31673
Ben wrote:
> ASIC libraries usually provide PCI pads, and I think these pads are
> adequate to be used for all the bidirectional PCI pins such as AD,
> FRAME#, IRDY#, etc.
> But I don't know if I can use the PCI bidirectional pad for input-only
> pins like CLK, RST#, IDSEL, and GNT# (in case of a non-arbiter device).
> In my opinion, normal CMOS pads (non-PCI) should be used for CLK, RST#,
> and IDSEL because they are usually from the system, not from the PCI
> bus. GNT# also often comes from a non-PCI device.
> Am I right?

You should use PCI pads for all PCI signals. If your lib doesn't offer
input/output-only pads, pick the bidir one and tie the enable bit to 0/1.
Bidir I/O buffers are a bit slower than in/out-only cells, but not too
much.

Lars

--
Address: University of Mannheim; D7, 3-4; 68159 Mannheim, Germany
Tel: +(49) 621 181-2716, Fax: -2713
email: larsrzy@{ti.uni-mannheim.de, atoll-net.de, computer.org}
Homepage: http://mufasa.informatik.uni-mannheim.de/lsra/persons/lars/

Article: 31674
Austin Franklin wrote:
> The PCI core itself, as far as PCI accessibility, is really only
> configuration space. I don't believe there's really anything to access
> there for a "driver"... If you mean access the resources that are
> behind the BARs etc., of course, that makes sense... but there isn't
> anything in the core itself that requires a driver, per se, I don't
> believe, and they would be particular to the application the back end
> is designed for, not the core.
>
> I don't understand what would be particular to "the PCI core", as
> opposed to any PCI device... the real particulars for a driver are
> what's in the memory and I/O space, and that's outside the core.

Ahh. At least you need a driver to find out what address your PCI device
was mapped to. This would pretty much concern the core. Also you would
like to enable and disable memory ranges, enable interrupts, burst
mode, ... core functions too. When all this is done, all there is left to
do to access your card is to map the memory ranges to user space (oops,
another core-related function). After that, you can start using the
functionality that you connected to the core.

Kolja Sulimma
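Kolja's first point, finding out where the device was mapped, comes down
to reading the Base Address Registers out of configuration space. A sketch
of decoding a single 32-bit BAR value, following the PCI 2.x configuration
space bit layout; the sample register values here are made up for
illustration, not taken from the thread:

```python
# Decode one 32-bit PCI Base Address Register value.
# Bit 0: 1 = I/O space, 0 = memory space.
# For memory BARs, bits 2:1 give the type (0 = 32-bit, 2 = 64-bit)
# and bit 3 marks the region as prefetchable.
def decode_bar(bar: int) -> dict:
    if bar & 0x1:                        # I/O space BAR
        return {"space": "io", "base": bar & ~0x3}
    return {                             # memory space BAR
        "space": "memory",
        "type": (bar >> 1) & 0x3,
        "prefetchable": bool(bar & 0x8),
        "base": bar & ~0xF,
    }

# Hypothetical BAR value for a prefetchable 32-bit memory region:
print(decode_bar(0xFEB00008))
```

Sizing the region (writing all ones to the BAR and reading back the mask)
and the interrupt/command-register setup are further steps a real driver
would perform; they are omitted here for brevity.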