Kamal Patel wrote:
> Giuseppe,
>
> This is correct. Windows XP and Windows 2000 will
> be the only supported Microsoft operating systems
> for 5.1i, although 5.1i should still install and
> run fine on Windows NT.
>
> Regards,
> Kamal Patel
> Xilinx Apps

Oh bloody brilliant :-((! Drop support for the only (nearly) reliable
O/S MS have ever managed to produce in 20 years of hacking [and only got
there 'cos DEC were using it as well]. That W2K sign on ``built on NT
technology'' makes me spit with anger every time I see it.
Article: 47201

http://www.xilinx.com/partinfo/ds003.htm

John wrote:
> I'm having difficulty finding information on XCV600's on the Xilinx
> site. Part of the problem is I know nothing about this type of part,
> including terminology. I have two boards with this part, but do not
> know how to check if they are the same revision, whether the firmware
> (?) is the same, or where to get the firmware, if it is compatible.
> Any constructive suggestions would be appreciated.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759
Article: 47203

Ahhhh, that would be why you guys are paying a million bucks for an
ASIC! When your design is fully synchronous, timing closure is nearly
automatic.

"Muzaffer Kal" <kal@dspia.com> wrote in message
news:l7mlou47nfvldqko1fn4lg2ddv4q7sreod@4ax.com...
> On Fri, 20 Sep 2002 03:55:02 GMT, Ray Andraka <ray@andraka.com> wrote:
>
> >In Peter's defense, I see several designs a year that claim to have
> >intention to move to an ASIC. Few if any ever do. Worse, if you code
> >for an easy transition to an ASIC, you leave many of the FPGA features
> >on the table, resulting in a design that is both bigger and slower
> >than it would be for a design specifically targeted to the FPGA. My
> >advice if you intend to go that route is to design specifically to the
> >FPGA, then later if you decide to go to ASIC start fresh.
>
> Actually there is no reason to start fresh with the ASIC. Anything you
> can do with any FPGA, you can do in standard cell easily (maybe except
> the 16 bit memory blocks, larger size SRAM is relatively easy). So it
> makes sense to use all features of the FPGA and then map them to
> standard cell. I'd bet one can do better with a .25u standard cell
> ASIC than any optimized FPGA at .13u process. One exception maybe the
> multipliers in Virtex-II or better the DSP block in Stratix.
> Especially the latter takes advantage of fully custom hard macro in a
> .13u 8LM copper process so it would be difficult to do better with
> .25u. But having the full flexibility of having all the metal layers
> only to the routing of hard wired gates still might be an advantage.
>
> Muzaffer Kal
>
> http://www.dspia.com
> ASIC/FPGA design/verification consulting specializing in DSP
> algorithm implementations
Article: 47204

This is my last note on this thread. Chip Express doesn't say it takes
30 masks. Sounds like he's getting the bendover special if he's paying a
million for a fully synchronous, pure digital ASIC; even with a million
ASIC gates, it shouldn't cost that much. IMHO.

BB
=============================================
"rickman" <spamgoeshere4@yahoo.com> wrote in message
news:3D8AB923.2A77F16D@yahoo.com...
> Peter Alfke wrote:
> >
> > Blackie Beard wrote:
> >
> > > Keep your design synchronous, or it will be
> > > a real beach to get an ASIC made later. <snip>
> >
> > How many people can still afford an ASIC?
> >
> > I don't want to start a flame, am just curious.
> > At 130 nm just the mask set (~30 masks) is close to a million
> > dollars, excluding design and verification efforts (That's a basic
> > fact, we pay this all the time)...
> > Is this irrelevant?
> > Just asking, I may be living in a biased environment. Please no
> > flames !
> >
> > Peter Alfke
>
> So just go with a 210 nm process where they are begging you to use their
> fabs. Maybe they are having a special and will give the masks away for
> free!!! B^)
>
> The OP was using an XCS05! I don't think that is done in anything close
> to a 130 nm process, is it? What are we talking, 250 nm or bigger,
> right?
>
> --
>
> Rick "rickman" Collins
>
> rick.collins@XYarius.com
> Ignore the reply address. To email me use the above address with the XY
> removed.
>
> Arius - A Signal Processing Solutions Company
> Specializing in DSP and FPGA design    URL http://www.arius.com
> 4 King Ave                             301-682-7772 Voice
> Frederick, MD 21701-3110               301-682-7666 FAX
Article: 47205

Blackie Beard wrote:
>
> Ahhhh, that would be why you guys are paying a million bucks for an ASIC!
> When your design is fully synchronous, timing closure is nearly automatic.
>

Anything complicated enough to make an ASIC out of would tend to have
multiple clock domains, etc. It would tend to be synchronous within each
domain, but if "timing closure is nearly automatic" were true then why
do we at Tality get paid piles of dosh to do the layout for so many
designs?

--
    ___#---  Andrew MacCormack  andrewm@tality.com
    L_ _|    Senior Design Engineer
    | |      Tality, Alba Campus, Livingston EH54 7HH, Scotland
    ! |      Phone: +44 1506 595360  Fax: +44 1506 595959
 T A L I T Y http://www.tality.com
Article: 47206

Ray Andraka <ray@andraka.com> wrote:
> I think synplicity does now too. My point is I'd like to see it in the
> LRM so that the code can be made portable between tools.

7.2.0 seems to accept reals for constants; previous versions didn't,
unfortunately.

I'm constantly amazed at how badly these tools support data types that
are only used to generate constants.

Hamish
--
Hamish Moffatt VK3SB <hamish@debian.org> <hamish@cloud.net.au>
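The idiom in question is worth pinning down: ieee.math_real appears only
on the right-hand side of constant declarations, so a compliant tool can
fold everything to an integer before any logic is mapped. A minimal
sketch, assuming a tool that folds elaboration-time constants; the gain
value below is an arbitrary example, and the names are invented:

    library ieee;
    use ieee.math_real.all;

    package coef_pkg is
      constant WIDTH    : integer := 16;
      -- Real arithmetic is confined to constant initializers, so
      -- synthesis only ever sees the resulting integer literal.
      constant GAIN     : real    := 1.0 / 1.64676025812107;
      constant GAIN_FIX : integer := integer(round(GAIN * 2.0**(WIDTH - 1)));
    end package coef_pkg;

Whether a given synthesis version accepts this is exactly the
portability problem being complained about above.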
Article: 47207

Umm, I didn't mean to imply that the FPGA designs are not
synchronous... We do all of our designs strictly synchronous. The design
rules are basically ASIC design rules. The trouble comes in when we
instantiate FPGA primitives, which is something we do to put placement
in the code and enforce use of specific FPGA features in order to reduce
the time to a design that works at the top end of the performance and
density envelope for FPGAs.

An FPGA design is generally pipelined much deeper than an ASIC, uses
ripple carry logic in all the arithmetic, and uses internal memory that
may be expensive to implement in an ASIC (we make extensive use of the
dynamic delay feature of the SRL16 element to optimize the design, for
example). Even at a strictly RTL level of coding, the FPGA architectures
tend to favor much deeper pipelining and no gated clocks, which is no
good for low power ASICs.

Blackie Beard wrote:
> Ahhhh, that would be why you guys are paying a million bucks for an ASIC!
> When your design is fully synchronous, timing closure is nearly automatic.
>
> "Muzaffer Kal" <kal@dspia.com> wrote in message
> news:l7mlou47nfvldqko1fn4lg2ddv4q7sreod@4ax.com...
> > On Fri, 20 Sep 2002 03:55:02 GMT, Ray Andraka <ray@andraka.com> wrote:
> >
> > >In Peter's defense, I see several designs a year that claim to have
> > >intention to move to an ASIC. Few if any ever do. Worse, if you code
> > >for an easy transition to an ASIC, you leave many of the FPGA
> > >features on the table, resulting in a design that is both bigger and
> > >slower than it would be for a design specifically targeted to the
> > >FPGA. My advice if you intend to go that route is to design
> > >specifically to the FPGA, then later if you decide to go to ASIC
> > >start fresh.
> >
> > Actually there is no reason to start fresh with the ASIC. Anything you
> > can do with any FPGA, you can do in standard cell easily (maybe except
> > the 16 bit memory blocks, larger size SRAM is relatively easy). So it
> > makes sense to use all features of the FPGA and then map them to
> > standard cell. I'd bet one can do better with a .25u standard cell
> > ASIC than any optimized FPGA at .13u process. One exception maybe the
> > multipliers in Virtex-II or better the DSP block in Stratix.
> > Especially the latter takes advantage of fully custom hard macro in a
> > .13u 8LM copper process so it would be difficult to do better with
> > .25u. But having the full flexibility of having all the metal layers
> > only to the routing of hard wired gates still might be an advantage.
> >
> > Muzaffer Kal
> >
> > http://www.dspia.com
> > ASIC/FPGA design/verification consulting specializing in DSP
> > algorithm implementations

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759
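For anyone new to the gated-clock point: the FPGA-friendly substitute is
a clock enable on a free-running clock. A minimal sketch, with invented
names:

    library ieee;
    use ieee.std_logic_1164.all;

    entity ce_reg is
      port (clk : in  std_logic;
            ce  : in  std_logic;
            d   : in  std_logic;
            q   : out std_logic);
    end entity ce_reg;

    architecture rtl of ce_reg is
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          if ce = '1' then   -- enable instead of gating the clock;
            q <= d;          -- maps to the flip-flop's CE pin
          end if;
        end if;
      end process;
    end architecture rtl;

The clock net stays clean and the timing analysis stays simple, which is
the FPGA side of the tradeoff; the ASIC side (clock gating for power) is
what the post says does not carry over.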
Article: 47208

The point is that a design for maximum FPGA performance/density includes
architectural decisions that may not map very efficiently to an ASIC.
Take a DSP design for example. In the FPGA, we add pipeline stages as
much as possible before and after arithmetic in order to minimize the
delays added onto the carry chain. In an ASIC, you don't have built-in
carry chains, and you don't have the rigid routing delay structure, so
it often makes sense to use things like Wallace trees where it didn't in
the FPGA. The design constraints and cost functions for different
structures are different for an ASIC than they are for an FPGA.

Sure, you can port a design for one to the other but it is not a free
lunch. ASIC to FPGA will usually wind up much larger and slower than is
possible with a design specifically targeted to the FPGA. FPGA to ASIC
can also be done, but it may require limits on the FPGA primitives
allowed or construction of special library elements to map those
primitives to. Not a problem for basic logic (although it may be
pipelined much deeper than necessary), more of a problem for the
arithmetic just from an area and speed perspective, and a bigger problem
for clock DLLs, CLB RAMs, SRL16's and dual ported memories. I'm not
saying that it can't be done, just that in the long run I think the cost
of the reengineering effort is negligible and provides benefits worth
more than the cost for those who can truly justify going to an ASIC.

Muzaffer Kal wrote:
> On Fri, 20 Sep 2002 03:55:02 GMT, Ray Andraka <ray@andraka.com> wrote:
>
> >In Peter's defense, I see several designs a year that claim to have
> >intention to move to an ASIC. Few if any ever do. Worse, if you code
> >for an easy transition to an ASIC, you leave many of the FPGA features
> >on the table, resulting in a design that is both bigger and slower
> >than it would be for a design specifically targeted to the FPGA. My
> >advice if you intend to go that route is to design specifically to the
> >FPGA, then later if you decide to go to ASIC start fresh.
>
> Actually there is no reason to start fresh with the ASIC. Anything you
> can do with any FPGA, you can do in standard cell easily (maybe except
> the 16 bit memory blocks, larger size SRAM is relatively easy). So it
> makes sense to use all features of the FPGA and then map them to
> standard cell. I'd bet one can do better with a .25u standard cell
> ASIC than any optimized FPGA at .13u process. One exception maybe the
> multipliers in Virtex-II or better the DSP block in Stratix.
> Especially the latter takes advantage of fully custom hard macro in a
> .13u 8LM copper process so it would be difficult to do better with
> .25u. But having the full flexibility of having all the metal layers
> only to the routing of hard wired gates still might be an advantage.
>
> Muzaffer Kal
>
> http://www.dspia.com
> ASIC/FPGA design/verification consulting specializing in DSP
> algorithm implementations

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759
Article: 47209

At that low a sample rate (10 Hz is almost glacial), it seems you could
use a DSP micro instead. Our rule of thumb is that if you can do it in a
single DSP microprocessor, don't use an FPGA. The design talent for a
software solution is cheaper and easier to find than hardware designers
who understand DSP.

That said, if you do do this in an FPGA, you can use a single multiplier
plus EAB memories (I believe you stated a preference for Altera) in a
structure that is not much different than what you would do in the
microprocessor. In effect what you are doing is time multiplexing all
the taps through a single multiplier. If not fast enough, you can
duplicate the multiplier as needed.

At much faster sample rates, you would want to shift to a distributed
arithmetic type architecture, provided you had a way to update the
coefficients at the desired rate. In Altera parts that is tough because
you don't have a way to update the LUT equations. In Xilinx, you can
replace the DA LUTs with SRL16 shift register elements, which when not
shifting behave exactly like a LUT. This provides a reload mechanism for
the LUTs that takes 16 clocks to reload each LUT. Using this method, it
is possible to build adaptive DA filters in Xilinx parts.

Dongho wrote:
> Actually I'm trying to implement an LMS FIR filter with 100 input
> channels.
> So there will be 100 adaptive FIR filters (with 10 taps) and an adder
> (which will sum the 100 FIR outputs and subtract the desired input
> from there).
> And I need to update the weights (coefs) of each of the 100 FIR filters.
> sampling rate: 10Hz (100ms), so I need to update weights within 100ms.
> precision: input (16 bits), coefficient (24 bits)
>
> thanks
>
> Ray Andraka <ray@andraka.com> wrote in message
> news:<3D88F8FE.C559AC67@andraka.com>...
> > You don't mention the required performance. It makes a huge difference.
> > Particularly, you need to state the sample rate as well as the
> > coefficient update rate. It would also help to know the precision (bits)
> > of the inputs and coefficients. There are ways to do this, subject to
> > certain restrictions. Let's see what your requirements are first.
> >
> > Dongho wrote:
> >
> > > Hi,
> > >
> > > I'd like to implement an adaptive FIR filter with 100 taps in an FPGA.
> > > To implement this, I need 100 multipliers to multiply the weights.
> > > How do I measure the area (or gates) of this filter so that I
> > > can find an appropriate chip?
> > > Thanks in advance for your response.
> > >
> > > -Regards
> > >
> > > dongho

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759
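To make the time-multiplexed structure concrete: at 10 Hz there are
100 channels x 10 taps = 1000 multiply-accumulates per 100 ms output
period, so even a slow FPGA clock gives thousands of spare cycles and
one multiplier can serve every tap. A minimal sketch of the shared MAC,
with invented names; the sample and coefficient memories and their
addressing are left out:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    -- One multiplier walks through coefficient/sample pairs, one per clock.
    entity tdm_mac is
      port (
        clk    : in  std_logic;
        start  : in  std_logic;                 -- '1' on a filter's first tap
        sample : in  signed(15 downto 0);       -- from sample memory
        coef   : in  signed(23 downto 0);       -- from coefficient memory
        acc_q  : out signed(43 downto 0));      -- product + guard bits
    end entity tdm_mac;

    architecture rtl of tdm_mac is
      signal acc : signed(43 downto 0) := (others => '0');
    begin
      process (clk)
        variable p : signed(39 downto 0);       -- 16x24-bit product
      begin
        if rising_edge(clk) then
          p := sample * coef;
          if start = '1' then
            acc <= resize(p, acc'length);       -- load on the first tap
          else
            acc <= acc + resize(p, acc'length); -- accumulate the other 9
          end if;
        end if;
      end process;
      acc_q <= acc;
    end architecture rtl;

Four guard bits above the 40-bit product cover a 10-tap sum; the LMS
weight update would time-share the same multiplier in the remaining
cycles.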
Article: 47210

Hi,

I am looking at the specifications for the spartan2e in a dash 7 speed
grade part. The spec sheet says maximum toggle frequency (for export
control) is 400MHz. What is the intent of the "for export control"
statement? Is that a US government requirement or does it mean "export"
as in "off-chip"? I suspect the former is the case.

My high-speed pulse width timer design uses a pair of DLLs in the
high-speed mode. The first acts as a doubler, the second just generates
clean 0 and 180 degree outputs. I then buffer these two outputs and use
them to drive a pair of 2 bit prescalers. Currently, the prescalers run
at 204.8 MHz with the initial input at 102.4 MHz. I am considering the
possibility of doubling the input clock to 204.8 MHz and running the
prescalers at 409.6 MHz.

By the way, I know I could have used 4 counters with the 0, 90, 180, and
270 outputs from the first DLL. The problem was in selecting and routing
the appropriate buffers for the 4 phases. By the time I got them routed,
the phase delays were no longer identical. The two phase system works
much better.

Thanks,
Theron Hicks
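For readers following the clocking arrangement, a sketch of the first
(doubler) DLL, assuming the Spartan-II/Virtex CLKDLL unisim primitive;
all names are invented, and the second DLL, which recovers clean 0/180
phases from the doubled clock, would hang off clk2x_buf the same way:

    library ieee;
    use ieee.std_logic_1164.all;
    library unisim;
    use unisim.vcomponents.all;

    entity clk_doubler is
      port (
        clk_in    : in  std_logic;   -- 102.4 MHz, from an IBUFG
        rst       : in  std_logic;
        clk2x_buf : out std_logic;   -- 204.8 MHz, on a global net
        locked    : out std_logic);
    end entity clk_doubler;

    architecture rtl of clk_doubler is
      signal clk0, clk0_buf, clk2x : std_logic;
    begin
      u_dll : CLKDLL
        port map (
          CLKIN  => clk_in,
          CLKFB  => clk0_buf,        -- deskew feedback from the CLK0 tap
          RST    => rst,
          CLK0   => clk0,
          CLK90  => open,
          CLK180 => open,
          CLK270 => open,
          CLK2X  => clk2x,           -- the doubled clock
          CLKDV  => open,
          LOCKED => locked);

      u_bufg0  : BUFG port map (I => clk0,  O => clk0_buf);
      u_bufg2x : BUFG port map (I => clk2x, O => clk2x_buf);
    end architecture rtl;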
Article: 47211

BUFGs drive the clock networks only and can't be used for anything
else. These have inputs inside the part, so they can be driven by
anything in the FPGA. IBUFGs are the input buffers on the 4 GCLK pins.
They are special I/O to the device that are optimized for clock. Unlike
the other I/O they do not have registers and are input only. These are
normally associated with clocks, but can also be used for general
purpose inputs, subject to the limitation that you can't register in the
IOB.

Laurent Gauch wrote:
> Rajeev,
>
> (1) For me, the IBUFG drives the global clock. These global clocks may
> come from an internal FPGA signal or from the 4 specific GCLK pads (for
> XCV). But an IBUFG is dedicated to each GCLK pad, also each GCLK pad
> can ONLY be routed to the IBUFG.
>
> Maybe: if 3 GCLK pads are used as clock inputs, in this case 3
> corresponding IBUFGs are used. If I use the last IBUFG for an internal
> clock source, the Reset signal (or other asynchronous signal) on the
> corresponding last GCLK pad can not be routed!
>
> Is that true?
>
> Laurent Gauch, Amontec
>
> Rajeev wrote:
>
> > Laurent,
> >
> > As I understand it,
> > (1) IBUFG does not drive a global clock net (that's BUFG's job)
> > (2) IBUFG output can drive non-clock inputs
> > so you can use IBUFG for your enable signal, as I did with an eval
> > board where the reset button was wired to a GCLK pin. You get a
> > warning message (IBUFG driving non-clock inputs) which may be ignored.
> >
> > Hope this helps,
> > -rajeev-
> > ------------------
> > Laurent Gauch <laurent.gauch@amontec.com> wrote in message
> > news:<3D8825C8.90009@amontec.com>...
> >
> >> Hi all,
> >>
> >> I am designing on XCV600 and I have to use a GCLK input as a standard
> >> input.
> >>
> >> On XCV600 we have 4 IBUFGs.
> >> My design uses 3 GCLK pins for 3 different clocks, these 3 GCLK pins
> >> are directly connected to 3 IBUFGs.
> >> I divide one of my 3 clocks by two via a flip-flop (I cannot use the
> >> DLL part because I have to produce an ASIC of this design and the
> >> customer doesn't want specific logic). The result of this division is
> >> connected to a IBUFG too.
> >> Now my problem is to connect and to use my last GCLK pin like a
> >> standard input.
> >> Is there a way to use a GCLK pin without passing through an IBUFG? If
> >> yes, which constraint will do that for me?
> >>
> > <...>

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759
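To make the buffer roles concrete, the usual instantiation path from a
GCLK pad to a clock net; a minimal sketch using the unisim components,
with invented names:

    library ieee;
    use ieee.std_logic_1164.all;
    library unisim;
    use unisim.vcomponents.all;

    entity clk_input is
      port (gclk_pad : in  std_logic;   -- one of the 4 dedicated GCLK pins
            clk      : out std_logic);  -- global clock net
    end entity clk_input;

    architecture rtl of clk_input is
      signal clk_i : std_logic;
    begin
      -- IBUFG: the dedicated input buffer on the GCLK pin.
      u_ibufg : IBUFG port map (I => gclk_pad, O => clk_i);
      -- BUFG: drives the global clock network; its input can come from
      -- anywhere inside the part, not just an IBUFG.
      u_bufg  : BUFG  port map (I => clk_i, O => clk);
    end architecture rtl;

For the general-purpose-input case discussed above, the IBUFG output
simply feeds fabric logic and the BUFG is omitted.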
Article: 47212

Me too. I think 7.1 also supports math_real for constants. We're still
using 7.0.3 because of the funky things 7.1 did to our instantiated
carry chains... we applied the patch for that, but it also sticks in an
extra LUT because of the syn_keep we have on the LUTs to prevent earlier
versions of Synplify wrecking the carry chains.

hamish@cloud.net.au wrote:
> Ray Andraka <ray@andraka.com> wrote:
> > I think synplicity does now too. My point is I'd like to see it in
> > the LRM so that the code can be made portable between tools.
>
> 7.2.0 seems to accept reals for constants; previous versions didn't,
> unfortunately.
>
> I'm constantly amazed at how badly these tools support data types that
> are only used to generate constants.
>
> Hamish
> --
> Hamish Moffatt VK3SB <hamish@debian.org> <hamish@cloud.net.au>

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759
Article: 47213

I'm not sure about the first statement, since if I had a chip with
custom DSP, custom DPLL, say 50-100K gates and only one clock, and I was
going to sell 10 million of them, why would I not make an ASIC? We
shouldn't need to break out the calculator to make my point.

Also, the design should take into account the transition between
multiple domains, and force synchronization between them. I suppose if
you couldn't use a FIFO bucket for data transfer between two domains,
then you'd have big timing concerns, because depending upon a race
condition not occurring would be just plain old hokey.

BB
======================================================
> Anything complicated enough to make an ASIC out of would tend to have
> multiple clock domains, etc. It would tend to be synchronous within each
> domain, but if "timing closure is nearly automatic" were true then why
> do we at Tality get paid piles of dosh to do the layout for so many
> designs?
>
Article: 47214

Theron Hicks wrote:
> Hi,
> I am looking at the specifications for the spartan2e in a dash 7 speed
> grade part. The spec sheet says maximum toggle frequency (for export
> control) is 400MHz. What is the intent of the "for export control"
> statement. Is that a US government requirement

Yes. There used to be (are?) restrictions on exporting very
sophisticated ICs to certain countries. And the government agency in
charge used the max frequency as a criterion. Since FPGA speed is a bit
difficult to classify, we added this parameter.

Peter Alfke
Article: 47215

Kamal Patel <kamal.patel@xilinx.com> wrote:
> This is correct. Windows XP and Windows 2000 will
> be the only supported Microsoft operating systems
> for 5.1i, although 5.1i should still install and
> run fine on Windows NT.

Is this the simple solution to some of the bugs in 4.2i? :-|

I've found 4.2i to be significantly more stable on Windows 2000 (SP2)
than Windows NT (SP6a).

Hamish
--
Hamish Moffatt VK3SB <hamish@debian.org> <hamish@cloud.net.au>
Article: 47216

I have been having a problem for a long time now when I try to
functionally simulate a VHDL design of a Xilinx Virtex-II FPGA which
uses several asynchronous FIFOs generated via the Coregen tool. I use
Modelsim 5.6d but had similar problems with 5.5 as well. When the
simulation tries to read all the data from one of the asynchronous
FIFOs, the empty flag is not asserted and it is not possible to read the
last word in the FIFO. This problem does not show up in the post-routed
gate-level simulation.

We sort of got around the problem by compiling the verilog Coregen async
FIFOs with +delay_mode_zero (an option not available in VHDL) and the
empty flag works, although other parts of the FIFO look broken. The
write clock is 125 MHz and the read clock is 33 MHz (again, this is a
functional simulation with no delays).

Has anyone seen a problem like this or can they suggest some obscure
mode of Modelsim that might need to be invoked to cure the problem? I
have submitted a web case to Xilinx on this issue, but so far they can't
reproduce my problem even though I've seen it on several different
installations of Modelsim and the Xilinx libraries.
Article: 47217

tony wrote:
> Duane:
>
> Thanks for your reply.
>
> > In any case, wherever you have the Xilinx software installed, there
> > should be a library xilinx\vhdl\src\unisims that contains the correct
> > libraries to use for your simulation.
>
> I checked and the libraries are there. But they are in a VHDL directory
> and I am trying to use verilog.

Similarly, there is a xilinx\verilog\src\unisims directory. I don't use
verilog, so I don't know how to use them.

--
My real email is akamail.com@dclark (or something like that).
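For what it's worth, the usual ModelSim flow for those Verilog unisims
is to compile them once into their own library and reference it when
loading the design. A sketch, assuming a standard Windows install with
%XILINX% set; exact paths and the testbench name (my_testbench here is
invented) will differ:

    rem Compile the Verilog unisims into their own library:
    vlib unisims_ver
    vlog -work unisims_ver %XILINX%\verilog\src\unisims\*.v

    rem Xilinx Verilog simulations also expect the global module:
    vlog -work work %XILINX%\verilog\src\glbl.v

    rem Reference the library when loading the design:
    vsim -L unisims_ver work.my_testbench work.glbl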
Article: 47218

I've been working with Xilinx's PCI-X core (5.0) on a Virtex-II for a
few weeks now. Maybe some of you have seen the following issues and
could post a comment.

The biggest issue is that only one in twenty of my builds are even
consistently detected by the BIOS. I only use the *_x stuff (with the
100MHz settings). My motherboard (Tyan S2720) only sees the board in the
66MHz slots when I put the bus speed on Auto in the BIOS. It only sees
it in the high speed slots when I put the bus speed at 100MHz in the
BIOS.

Several things I had to do to get the thing to ever be seen: There is a
loose plug in the userapp.vhd (FPGA_RTR) that I had to tie down to keep
it from showing up on a clock pin or other random places. Second, I had
to make all my select/when statements in the userapp.vhd anally mutually
exclusive. Third, I had to use standard OBUFs on the pads running to
programming pins on other chips and OFDDRTCPE/IFDDRCPE on the data pins
to the other chips. If I change the standard OBUFs to OFDX to be
consistent with the DDRs, the whole thing fails to be recognized by the
BIOS. Same problem if I change my DDRs to IOBUFs.

So here are the questions: What the heck do the buffer types in the
userapp.vhd have to do with the PCI controller being recognized by the
BIOS? Do I need to put any IOBDELAY on the PCI registers or the buffers
that carry the data from the registers to the other chips? Or is there
some other net/inst attribute I should use? When I include the
userapp.vhd data pads in the timing spec, it only routes at 60MHz. Is
this causing a problem? I don't seem to have trouble with the data at
100MHz, but of course I don't send data through every clock cycle
either. Is there something I need to do with data ins/outs in the
controller on reboot that I'm not doing, and which is causing some good
builds and some bad builds? I understand there is a bit of randomness in
the PAR, but there's definitely something wrong with having to run that
until I find a working build. Does OFDX with a tied-high CE work
significantly differently from OBUF?

Sorry for the long post. Maybe this can help somebody else.
Article: 47219

Hi John,

In addition to the datasheet that Ray pointed you to, you may also want
to check out:
http://support.xilinx.com/xlnx/xil_ans_display.jsp?iLanguageID=1&iCountryID=1&getPagePath=1067

Solution 2 has the information that you'll need to check to see if the
parts themselves are the same silicon revision. As for firmware, these
are completely programmable chips so they could have any design in the
world in them. There isn't an easy way to tell whether or not they're
the same design either. For that, you'll have to check with the
manufacturer and/or designer of the boards.

Best regards,
Ryan

John wrote:
> I'm having difficulty finding information on XCV600's on the Xilinx
> site. Part of the problem is I know nothing about this type of part,
> including terminology. I have two boards with this part, but do not
> know how to check if they are the same revision, whether the firmware
> (?) is the same, or where to get the firmware, if it is compatible.
> Any constructive suggestions would be appreciated.
Article: 47220

Hi Mikhail,

please read carefully through the pin-file you get from Quartus. It
says:

-- GND+ : Unused input. This pin should be connected to GND. It may also
--        be connected to a valid signal on the board (low, high, or
--        toggling) if that signal is required for a different revision
--        of the design.
-- GND* : Unused I/O pin. This pin can either be left unconnected or
--        connected to GND. Connecting this pin to GND will improve the
--        device's immunity to noise.

This should answer the question on "what it means".

To reserve the FAST pins as inputs (being tri-stated), go into the
compiler settings dialog -> chips & devices -> assign pins. Select the
pin, use a name you like (dummy1...3), and select "reserve pin even if
it does not exist in the design".

HTH
Christoph
--

Mikhail Matusov schrieb:
>
> Thanks for your quick reply, Rene!
>
> However, pins marked with GND+ are not configuration pins in my case. They
> are either dedicated Fast I/O or clock related pins. The one that bothers me
> the most is actually a Fast I/O. I have a customer who has this pin
> connected to something on his board and the board doesn't work. So, I need
> to know whether I am driving it by any chance... The design is proven to
> work on our own card and on other customer's card, so I know it should
> work...
>
> /Mikhail
>
> "Rene Tschaggelar" <tschaggelar@dplanet.ch> wrote in message
> news:3D8A1D3E.9010404@dplanet.ch...
> > I had a discussion about exactly that subject today.
> > From MaxPlus2:
> > ^ means dedicated pin
> > + means reserved configuration pin, which is tristated
> >   during user mode
> > * means reserved configuration pin, which drives out in
> >   user mode
> >
> > I guess it means you have to connect them to GND.
> >
> > Rene
> > --
> > Ing.Buero R.Tschaggelar - http://www.ibrtses.com
> > & commercial newsgroups - http://www.talkto.net
> >
> > Mikhail Matusov wrote:
> > > Hi all,
> > >
> > > I am looking into a report file generated by Quartus for an Apex design and
> > > I see several unused in the design pins to be called GND+ and GND*. I asked
> > > the tools to tri-state all the unused pins and it worked for 95% of them but
> > > there are these 7 pins assigned to GND+ and 2 to GND*. Three of them are
> > > dedicated Fast pins, the forth Fast pin is actually used in the design.
> > > Others are dedicated clock pins such as CLK4p, CLKLK_FB1p, etc. So, does
> > > anyone know why this is happening and what the hell GND+ and GND* is
> > > supposed to mean? I can't find this explained anywhere...
Article: 47221

Hello,

If you have a large number of simultaneously switching I/Os, it is good
practice to connect the unused I/O pins to GND; it helps reduce "ground
bounce". Still, that should be left to the designer to decide, not the
PAR software... right??

jakab

Rene Tschaggelar <tschaggelar@dplanet.ch> wrote in message
news:3D8A1D3E.9010404@dplanet.ch...
> I had a discussion about exactly that subject today.
> From MaxPlus2:
> ^ means dedicated pin
> + means reserved configuration pin, which is tristated
>   during user mode
> * means reserved configuration pin, which drives out in
>   user mode
>
> I guess it means you have to connect them to GND.
>
> Rene
> --
> Ing.Buero R.Tschaggelar - http://www.ibrtses.com
> & commercial newsgroups - http://www.talkto.net
>
> Mikhail Matusov wrote:
> > Hi all,
> >
> > I am looking into a report file generated by Quartus for an Apex design and
> > I see several unused in the design pins to be called GND+ and GND*. I asked
> > the tools to tri-state all the unused pins and it worked for 95% of them but
> > there are these 7 pins assigned to GND+ and 2 to GND*. Three of them are
> > dedicated Fast pins, the forth Fast pin is actually used in the design.
> > Others are dedicated clock pins such as CLK4p, CLKLK_FB1p, etc. So, does
> > anyone know why this is happening and what the hell GND+ and GND* is
> > supposed to mean? I can't find this explained anywhere...
Article: 47222

Can you give more information on the package type and how all the "not
used" pins are connected? Looking into your mapping file and comparing
the old one with the new one can help; even though it may not be the
answer, it is worth looking into.

Would like to know when you find the answer,

/Farhad

pierrotlafouine@hotmail.com (Pierre Lafrance) wrote:
> Hi all
> I respun a product, changing the old Xilinx XCV300 for an XCV600E.
> Of course, I had to change the voltage regulator from 2.5 to 1.8 volts,
> and a few 5 volt CPLDs to 3.3v.
>
> The problem is: the XCV600E overheats, and 1 of the prototypes just
> died.
> I tried to find any hardware signal that would exceed voltage but
> couldn't. Hardware seems to be just fine. I suspect the chip itself
> overheats. I just put a heatsink on temporarily, but would like to
> solve the problem if I can.
>
> Simulation with XPower estimates the chip temperature to be 50C, but I
> measure up to 65C.
>
> The design uses 75% of the BRAM and 75% of the FFs of the XCV600E.
> Clock is 82MHz.
>
> Anybody experienced overheating with the 600E?
>
> Cheers!
>
> Pierre
Article: 47223

Alan Raphael wrote:
> We sort of got around the problem by compiling
> the verilog Coregen async FIFOs with +delay_mode_zero (an option not
> available in VHDL) and the empty flag works, although other parts of the
> FIFO look broken. The write clock is 125 MHz and the read clock is 33
> MHz (again, this is a functional simulation with no delays).

Sounds more like a logic race than a Modelsim problem. Consider
synchronizing the 33 MHz controls to 125 MHz and using a synchronous
FIFO.

-- Mike Treseler
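A minimal sketch of the suggested fix, with invented names: resample
each slow-domain control bit in the 125 MHz domain with two flip-flops
before it touches a synchronous FIFO.

    library ieee;
    use ieee.std_logic_1164.all;

    entity sync_2ff is
      port (clk_fast : in  std_logic;   -- 125 MHz domain
            din      : in  std_logic;   -- control from the 33 MHz domain
            dout     : out std_logic);
    end entity sync_2ff;

    architecture rtl of sync_2ff is
      signal meta, stable : std_logic := '0';
    begin
      process (clk_fast)
      begin
        if rising_edge(clk_fast) then
          meta   <= din;    -- first flop may go metastable
          stable <= meta;   -- second flop gives it a cycle to settle
        end if;
      end process;
      dout <= stable;
    end architecture rtl;

This is for single-bit controls only; multi-bit values crossing domains
still need gray-coded pointers or a handshake, which is what the Coregen
async FIFO does internally.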
Article: 47224

rickman wrote:
>
> Speaking of MS, anyone know what little tricks they are using to get
> companies to switch from 98, ME, NT and 2000 to XP?

Bike chains? ;-)

> I just can't believe
> they are going to sit by and let everyone keep using the old OS with
> licences that can't be well enforced.

They are probably sitting there, while we walk to Linux ;-) As soon as
Xilinx ports all the stuff as native applications ...

cheers