The delay should be fine. I am clocking the ADC with the FPGA, so the FPGA
will be reading the current data at the output of the ADC at the rising
clock edge. There are then 32 ns for the clock to get to the ADC, and then
for the ADC outputs to get to the FPGA inputs, before the next rising
clock edge.

adrian

> Current is not the issue. At a guaranteed max input current of 10
> microamps, you could drive a fan-out of two thousand!
> Delay is the issue. How much capacitance are you driving?
> For a very rough approximation assume each input to be 10 pF, double or
> triple that for the pc-board traces, and assume a 20 Ohm output impedance.
> At seven loads times 40 pF, you get a time constant of 5.6 ns (very
> pessimistically). This ignores signal integrity issues like reflections etc.
> Can you live with a 5 to 10 ns delay?
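A quick sanity check of those numbers, assuming the pessimistic
first-order RC model from the quoted post:

    tau = R * C = 20 Ohm * (7 loads * 40 pF) = 20 * 280e-12 s = 5.6 ns
    3 * tau = 16.8 ns   (output settled to roughly 95%)

so even allowing three time constants of settling still fits comfortably
inside the 32 ns round-trip budget Adrian describes.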
Article: 43451

This is probably my ignorance showing here, but if you had a highly stable
input clock with no jitter, then would not the DLL output be just as stable?

Adrian

> Austin,
>
> Thanks for your comments. The DAC part is indeed to be used in a 3G
> type application and therefore clock stability is very important.
>
> > Bert,
> >
> > True. DLLs, DCMs have jitter. They do wonderful things, but at a cost.
> > For sampling clocks, or D/A clocks, always recognize that there will be
> > jitter, and in the most demanding applications (3G base stations), one
> > may have to use the best possible LVPECL clock, differentially
> > distributed, directly to the D/As and A/Ds (see the Analog Devices
> > Website, and read their app notes on this subject, they are excellent
> > in this regard).
> >
> > In fact, not even a PLL may be used for these kinds of situations,
> > especially not one integrated into an FPGA.
> >
> > Once the data has been sampled by the high quality clock, it can then
> > be processed in the FPGA, and then output and retimed by the high
> > quality clock into the D/As. DCMs (or DLLs) are then used internal to
> > the FPGA to deskew the clocks, and ease the IO timing.
> >
> > In other areas, where jitter is less of an issue (such as the audio DAC
> > posting on this board), DCMs (DLLs) are not going to be an issue, and
> > clock synthesis becomes the problem.
> >
> > Austin
> >
> > Bert Wibble wrote:
> >
> > > Adrian <> wrote in message news:<ee76d20.4@WebX.sUN8CHnE>...
> > > > Are you going to drive the DAC with a clock signal from your DLL?
> > > > You can just use different phase shifted signals until it works and
> > > > you meet your setup requirements. I would guess either a 180 degree
> > > > or 270 degree phase shift would work.
> > > >
> > > > adrian
> > >
> > > No plans to clock the DAC from the FPGA DLL due to the very low jitter
> > > requirements on the DAC clock. DLLs are just too noisy for this sort
> > > of thing. Unfortunately.
> > >
> > > Bert
Article: 43452

Austin Lesea wrote:

> three hands .... I place the tweezers in my teeth to flip the SM part off
> if I soldered the wrong one on there the first time), and some small
> diameter solder.
>
> It is just a technical skill, not unlike playing a musical instrument.
> Practice......

One thing missing from Austin's list: solder braid (aka solder wick) for
getting rid of those little solder bridges between IC legs or between
neighbouring pads.

One odd thing I've found is that some people swear by the soldering iron
tip that's cylindrical with a little flat bit at the end on one side,
whereas others, like me, prefer the very narrow point ones even though
they tend to burn out more quickly.
Article: 43453

Q-Tips are good to move solder out of the way, too.

Solder wick is good. The vacuum solder pull toys are not so nice; they
tend to pull the pads right off the pcb if you have gotten things too hot.

Austin

Rick Filipkiewicz wrote:

> Austin Lesea wrote:
>
> > three hands .... I place the tweezers in my teeth to flip the SM part
> > off if I soldered the wrong one on there the first time), and some
> > small diameter solder.
> >
> > It is just a technical skill, not unlike playing a musical instrument.
> > Practice......
>
> One thing missing from Austin's list: solder braid (aka solder wick) for
> getting rid of those little solder bridges between IC legs or between
> neighbouring pads.
>
> One odd thing I've found is that some people swear by the soldering iron
> tip that's cylindrical with a little flat bit at the end on one side
> whereas others, like me, prefer the very narrow point ones even though
> they tend to burn out more quickly.
Article: 43454

> Finally, I don't think you
> should ever use tri-stated on-chip buses, they are here only to
> interface with the outside world...
>
> regards,
> juza

Hi Juza,

So what if they are primarily used to interface to the outside world? That
is what a large number of designs do!

The internal tri-state busses in the Xilinx parts had some major advantages
over other methods. They did not take up logic resources...and weren't
counted in the "gate count". They were (far) faster than using a mux if
you had, say, more than 4 connections. They also reduced the routing
resources substantially. All of those could make or break a design!

Austin
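For anyone who hasn't used them: an internal tri-state bus of the kind
Austin describes can be inferred in VHDL roughly as below. This is a
minimal sketch with made-up signal names, assuming a device family with
TBUF resources and one-hot enables; it is not code from this thread.

    library ieee;
    use ieee.std_logic_1164.all;

    entity tbuf_bus is
      port (
        data_a, data_b, data_c : in  std_logic;  -- three bus drivers
        en_a,   en_b,   en_c   : in  std_logic;  -- one-hot driver enables
        bus_out                : out std_logic
      );
    end tbuf_bus;

    architecture rtl of tbuf_bus is
      signal shared : std_logic;  -- one resolved signal, multiple drivers
    begin
      -- each concurrent assignment becomes one driver (a TBUF on parts
      -- that have them); exactly one enable may be active at a time
      shared <= data_a when en_a = '1' else 'Z';
      shared <= data_b when en_b = '1' else 'Z';
      shared <= data_c when en_c = '1' else 'Z';

      bus_out <= shared;
    end rtl;

The mux alternative replaces the three assignments with a selector,
trading TBUF sites for LUTs and routing, which is exactly the cost Austin
is weighing.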
Article: 43455

Noddy,

Anything you do introduces jitter. Even a single inverter introduces
jitter. Now imagine thousands of inverters (aka a delay line). Now imagine
moving taps to the thousands of inverters. This is where jitter comes from.

Jitter is also present whenever you have a mismatch in impedance, a
discontinuity, cross talk coupling, ground bounce, or any active switching
or amplifying element operating above absolute zero (ie noise in voltage
translates into noise in time, or jitter).

Oh, and there is no such thing as a clock with no jitter, just a very good
clock with acceptably small jitter.

Austin

Noddy wrote:

> This is probably my ignorance showing here, but if you had a highly stable
> input clock with no jitter, then would not the DLL output be just as stable?
>
> Adrian
>
> > Austin,
> >
> > Thanks for your comments. The DAC part is indeed to be used in a 3G
> > type application and therefore clock stability is very important.
> >
> > > Bert,
> > >
> > > True. DLLs, DCMs have jitter. They do wonderful things, but at a cost.
> > > For sampling clocks, or D/A clocks, always recognize that there will
> > > be jitter, and in the most demanding applications (3G base stations),
> > > one may have to use the best possible LVPECL clock, differentially
> > > distributed, directly to the D/As and A/Ds (see the Analog Devices
> > > Website, and read their app notes on this subject, they are excellent
> > > in this regard).
> > >
> > > In fact, not even a PLL may be used for these kinds of situations,
> > > especially not one integrated into an FPGA.
> > >
> > > Once the data has been sampled by the high quality clock, it can then
> > > be processed in the FPGA, and then output and retimed by the high
> > > quality clock into the D/As. DCMs (or DLLs) are then used internal to
> > > the FPGA to deskew the clocks, and ease the IO timing.
> > >
> > > In other areas, where jitter is less of an issue (such as the audio
> > > DAC posting on this board), DCMs (DLLs) are not going to be an issue,
> > > and clock synthesis becomes the problem.
> > >
> > > Austin
> > >
> > > Bert Wibble wrote:
> > >
> > > > Adrian <> wrote in message news:<ee76d20.4@WebX.sUN8CHnE>...
> > > > > Are you going to drive the DAC with a clock signal from your DLL?
> > > > > You can just use different phase shifted signals until it works
> > > > > and you meet your setup requirements. I would guess either a 180
> > > > > degree or 270 degree phase shift would work.
> > > > >
> > > > > adrian
> > > >
> > > > No plans to clock the DAC from the FPGA DLL due to the very low
> > > > jitter requirements on the DAC clock. DLLs are just too noisy for
> > > > this sort of thing. Unfortunately.
> > > >
> > > > Bert
"Noddy" <g9731642@campus.ru.ac.za> schrieb im Newsbeitrag news:1022011428.817330@turtle.ru.ac.za... > This is probably my ignorance showing here, but if you had a highly stable > input clock with no jitter, then would not the DLL output be just as stable? Yes, it would be long-term stable, but heavyly polluted with high-speed (short-term) phase noise, aka jitter, generated by the DLL while tracking the input signal. -- MfG FalkArticle: 43457
Article: 43457

Arthur wrote:

> ...
> I believe what is going on is that the
> Jedec tool is case sensitive, and the proper case is lost during the
> creation of the TMV file - and as a result, the Jedec tool can't
> figure out that DATAIN == datain and therefore you get XXXX's in your
> Jedec file.

After I changed all I/O names to lower case it worked; the vectors are now
written to the .jed.

> This functional test problem doesn't have anything to do with the BSDL
> errors you are getting. You could have the problem as described in
> Solution record 12737

Updating to iMPACT 4.2 solved this problem, and now I get many more
errors: in my vectors during functional test ;-). I think it's the pin
feedback on some signals, otherwise my circuit would not work at all.

Thanks a lot for your help,
Joerg.
"Softley, C." <softley@natlab.research.philips.com> writes: > For those not listening on c.a.fpga, tristates on-chip in ASICs > remain very useful for intermodule comms: tristate busses are narrower than > they would otherwise have to be, thereby saving a bundle of interconnect, > area (=cost) and likely power and EMC into the bargain. Speed may not > be as high as separating out the busses, due largely to delays inside > the buffer circuits themselves, but delays in this sort of situation are > increasingly dominated by signal propagagation down the distributed R-L-C > of the wires anyway. While I agree on your basic assertion that tri-state is slower than dedicated busses, I disagree on your comments about the speed differences. As I read your comments, you are claiming that tri-state busses are only marginally slower than dedicated busses. I beg to disagree! As with busses on a backplane (say, VME or PCI) the speed and cycle time of a tri-state bus is basically determined by the distance that the signal must cover. On directed busses, you can either waveform pipeline or even physically pipeline the bus, to increase the performance of the bus (see for example SCI [IEEE Std1596-1992], which uses a 16bit parallel bus, running at 500MHz). Also, your friendly backend'er (the person doing floorplanning, place & route) may tell you that tri-state busses are out of the question due to physical issues. Finally, even if you CAN use a tri-state bus with your current library from vendor A, you might not use it in vendor B's design flow. Why box yourself in??? Sure, if you don't have significant bandwidth requirements you can get away with murder... Regards, Kai -- Kai Harrekilde-Petersen <khp@vitesse.com> Opinions are mine...Article: 43459
Article: 43459

Hi,

I have been very unhappy that since ISE 4.x, XST (Xilinx Synthesis
Technology) only generates an encrypted NGC netlist, and not an EDIF
netlist which can be read and edited if needed. Because of that, I even
had to resort back to WebPACK ISE 3.3's XST, which still generated an EDIF
netlist. (See "Duplicating IOB FFs Without I/O Pads Being Inserted in XST"
if you are interested.) I needed to edit the generated EDIF file because
XST won't duplicate FFs for IOBs if I/O pads weren't added, but I cannot
add I/O pads because I am doing an IP core. (The backend logic that
instantiates the IP core will also instantiate I/O pads.)

However, yesterday I accidentally found a way to get the latest XST
(Ver. E.35) to generate an EDIF netlist despite the fact that XST's manual
says that it doesn't support it. The trick I found was to change
(Your Project).xst's -ofmt option ((Your Project).xst is located under
your project's directory.).

-ofmt NGC

Should be changed to,

-ofmt EDIF

However, Xilinx is really mean (unreasonable), and if you try to
synthesize a design from ISE's GUI, it will change the -ofmt option back
to NGC. To avoid that, you will have to run XST from a batch file (.BAT
file in DOS) from your project's directory. In case someone is not too
familiar with running XST from a command line, the batch file will look
like this,

xst -ifn (Your Project).xst -ofn (Your Project).syr

This way, ISE cannot manipulate the .xst file, so XST will generate an
EDIF netlist. If someone is interested further in this -ofmt option, you
can change it to something invalid like NGO or I_hate_NGC, and XST should
display a message that says only NGC, EDIF, or NGO_EDIF are the supported
output formats.

The way I found this trick was when I was reading XST's manual: it
mentioned that the only valid -ofmt option was NGC (-ofmt NGC). Before ISE
4.x, XST used to generate an EDIF file, so this field used to be EDIF
(-ofmt EDIF). Going back to XST's manual, it says the only valid -ofmt
option is NGC, but when I saw the word 'only', I suddenly thought, "Why
don't I change this to EDIF, and see what happens?" So, I changed the
-ofmt option to EDIF, ran it from a batch file which I normally don't do,
and it worked . . .

So, after all, the latest XST actually supports generating an EDIF
netlist; therefore, I really have to wonder why Xilinx doesn't want people
to know that. There are several theories why XST now only 'officially'
generates an encrypted NGC netlist, and some people speculate that Xilinx
doesn't want people to get a free synthesis tool that generates an EDIF
netlist which can, in theory, be converted to other vendors' library
primitives, wants to protect IP cores although what XST generates is not
necessarily IP cores, etc.

My latest conspiracy theory is that perhaps some third-party synthesis
tool vendors don't necessarily appreciate the existence of XST. If XST
were so good, and generated an EDIF netlist, I am sure some users wouldn't
pay another several thousand dollars for a third-party synthesis tool,
considering that XST comes with ISE free of charge. Therefore, to make
third-party synthesis tool vendors happy, Xilinx 'crippled' XST by not
'normally' allowing it to generate an EDIF netlist, so that when users ask
Xilinx how to get XST to generate an EDIF netlist, Xilinx can say, "Use a
third-party synthesis tool to get an EDIF netlist."

Another posting from a Xilinx employee several months ago said that
eliminating the EDIF netlist generation capability is "a huge advantage
that XST now has.", but as far as I could tell when I tried synthesizing a
small design with an NGC output option and an EDIF output option, I didn't
see a difference in logic usage.

http://groups.google.com/groups?hl=en&lr=&selm=3C2CDBDA.5060102%40xilinx.com

I don't quite understand what this "huge advantage" really is, but perhaps
it is an opportunity for third-party synthesis tool vendors to make more
money. Another plausible reason why XST actually still generates an EDIF
netlist might be that Xilinx needs this capability for debugging purposes
of the synthesis tool, although I am sure Xilinx has an NGC2EDIF
conversion tool in-house for development purposes.

I would like to hear Xilinx's response regarding my conspiracy theories,
and why Xilinx won't let users know that XST Ver. E.xx can actually
generate an EDIF netlist.

Kevin Brace (In general, don't respond to me directly, and respond within
the newsgroup.)
Article: 43460

Nice work Kevin, esp. the new conspiracy reason.

Question: For which version of XST will EDIF well & truly disappear?
Place your bets, ladies and gentlemen; in a few moments, no more bets.
Article: 43461

OK then. All of you have supposed that I know very little about FPGAs. It
is not as you think, but I must admit that I am new to FPGAs.

But a part of my writings was a question: if it is not so, why can't it be
done? Are the properties of silicon arrays responsible for this? What
technologies can be used to make it work? I am sure that all of you would
use a fully reconfigurable FPGA as soon as it became available. So I think
these questions are worth answering.

The overall target of questions, answers and ideas in this message thread
should not be how it can't be done but how it can be done. More precisely,
the target is still this one: we want to make (or discuss) a
reconfigurable device that will allow us to make almost any
interconnection between "branches" and "operation units and memory" in any
acceptable form. The two quoted strings stand for basic building blocks of
an algorithm.

I have not found any more useful group than this one, so I have decided to
question the people involved in this one (just to explain why I addressed
these questions to you).

Further questions and suggestions:
1. Should we not use something different from silicon?
2. Are there any attempts to make a prototype of such a device?
3. We don't need to start with properties like fast and small, but rather
   to achieve proper functioning.
4. Is there something that can be reconfigured from memory to a branch and
   vice versa?

I look forward to your suggestions and ideas....
"Arthur" <darrien42@yahoo.com> wrote in message news:d264e46.0205210804.caa6769@posting.google.com... > So that would be this address : > http://support.xilinx.com/xlnx/xil_prodcat_landingpage.jsp?title=ISE+WebPack > > -Then click on 'Download ISE WebPACK' (or register first if you don't > have a Xilinx login) > -Click on the CPLD link > -Change the version to 3.3WP8.1 > -Click Download button > -Download the XPLA Programmer module > > As Jim said in his earlier post - the XCR5xxx CPLD families were > obsoleted by Xilinx 1 to 2 years ago, and you need to use the older > programming tool in order to program these devices. > > For information on the obsoleted CoolRunner devices, go here: > http://www.xilinx.com/xlnx/xil_prodcat_product.jsp?title=coolpld_page > > Regards, > Arthur Actually I think it is worse than that I've tried to use Impact that came with the most recent Webpack to program some XCR3032XL's (which are current according to Xilinx's website) but no matter what i tried it wouldn't work. In the end I had to go back downloading all the Programmer modules until I got to the 3.3WP8.1 (XPLA Programmer 4.14) which works like a charm. I know all the programmer hardware works ok, because I plug it into a XC95144XL and Impact detects and programs that no problem. Tim SimpsonArticle: 43463
Article: 43463

Jan Ziak wrote:

> <snip>
> Further questions and suggestions:
> 1. Should we not use something different from silicon?

A strange question. The material is of little consequence, but if you
propose anything other than silicon, it had better have some significant
edge to pay for the 'hard yards' of getting it to work.

> 2. Are there any attempts to make a prototype of such a device?

What device? There have been some notes on 'random seeding' FPGAs to see
if some intelligence can 'evolve'?

> 3. We don't need to start with properties like fast and small, but
>    rather to achieve proper functioning.

But 'fast' and 'small' had better be close behind, if it is not to be a
complete waste of time.

> 4. Is there something that can be reconfigured from memory to a branch
>    and vice versa?

You need to clarify this, and study FPGA cells closer. What is a branch?
A connection, a state-transition decision, a subroutine call...?

And no mention at all of the tools, software and training needed to get
anything to work ......

-jg
Article: 43464

Jan, the people in this newsgroup together know practically everything
that is worth knowing about FPGAs, and about using them as logic devices,
including the software that configures, simulates and tests them. And many
of us are interested and eager to help.

But you have to help us understand your general questions. You have to
somehow express them in hardware or logic terms, otherwise we do not
communicate, or we miscommunicate. As I said before, FPGAs can perform any
imaginable logic or arithmetic function, with certain size and speed
constraints.

Peter Alfke

Jan Ziak wrote:

> OK then. All of you have supposed that I know very little about FPGAs.
> It is not as you think, but I must admit that I am new to FPGAs.
>
> But a part of my writings was a question: if it is not so, why can't it
> be done? Are the properties of silicon arrays responsible for this? What
> technologies can be used to make it work? I am sure that all of you
> would use a fully reconfigurable FPGA as soon as it became available. So
> I think these questions are worth answering.
>
> The overall target of questions, answers and ideas in this message
> thread should not be how it can't be done but how it can be done. More
> precisely, the target is still this one: we want to make (or discuss) a
> reconfigurable device that will allow us to make almost any
> interconnection between "branches" and "operation units and memory" in
> any acceptable form. The two quoted strings stand for basic building
> blocks of an algorithm.
>
> I have not found any more useful group than this one, so I have decided
> to question the people involved in this one (just to explain why I
> addressed these questions to you).
>
> Further questions and suggestions:
> 1. Should we not use something different from silicon?
> 2. Are there any attempts to make a prototype of such a device?
> 3. We don't need to start with properties like fast and small, but
>    rather to achieve proper functioning.
> 4. Is there something that can be reconfigured from memory to a branch
>    and vice versa?
>
> I look forward to your suggestions and ideas....
Article: 43465

Rick Filipkiewicz wrote:

> Nice work Kevin, esp. the new conspiracy reason.
>
> Question: For which version of XST will EDIF well & truly disappear?
> Place your bets, ladies and gentlemen; in a few moments, no more bets.

Is this question intended for Xilinx, or me? Hopefully, ISE 5.x won't kill
XST's EDIF netlist generation capability.

Kevin Brace (In general, don't respond to me directly, and respond within
the newsgroup.)
Article: 43466

Wim,

If you really want to have fast download speed and long cables, I
recommend that you do not just extend the JTAG cable itself. Rather, get
the Altera MasterBlaster and a long USB cable, a setup that worked great
for me with a long (~5 m) USB cable. My estimate is that the JTAG TCK runs
at approx 1 MHz...

Endric

--
Bridges2Silicon, Inc.
Endric Schubert, PhD
471 E. Evelyn Ave.
Sunnyvale, CA 94086
http://www.bridges2silicon.com
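For a sense of scale (an assumed example, not a figure from the post):
JTAG configuration shifts roughly one bitstream bit per TCK, so a 1 Mbit
bitstream at a 1 MHz TCK takes on the order of

    t ~ 1,000,000 bits / 1,000,000 bits/s ~ 1 s

plus protocol overhead, which is why a faster or more reliable link
matters for large parts.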
Article: 43467

Kevin Brace wrote:

> <snip>
> The trick I found was to change (Your Project).xst's -ofmt
> option ((Your Project).xst is located under your project's directory.).
>
> -ofmt NGC
>
> Should be changed to,
>
> -ofmt EDIF
>
> <snip>
> So, after all, the latest XST actually supports generating an EDIF
> netlist; therefore, I really have to wonder why Xilinx doesn't want
> people to know that. There are several theories why XST now only
> 'officially' generates an encrypted NGC netlist, and some people
> speculate that Xilinx doesn't want people to get a free synthesis tool
> that generates an EDIF netlist which can, in theory, be converted to
> other vendors' library primitives, wants to protect IP cores although
> what XST generates is not necessarily IP cores, etc.
> My latest conspiracy theory is that perhaps some third-party synthesis
> tool vendors don't necessarily appreciate the existence of XST.

It does sound more like an idea from off a golf-course than from an
engineering lab :-)

.NGC = Near Golf-course Concession ?

> If XST were so good, and generated an EDIF netlist, I am sure some users
> wouldn't pay another several thousand dollars for a third-party
> synthesis tool, considering that XST comes with ISE free of charge.
> Therefore, to make third-party synthesis tool vendors happy, Xilinx
> 'crippled' XST by not 'normally' allowing it to generate an EDIF
> netlist, so that when users ask Xilinx how to get XST to generate an
> EDIF netlist, Xilinx can say, "Use a third-party synthesis tool to get
> an EDIF netlist."

Companies that tried encrypting data files in the past got nothing but
aggravation, both from customers and from a much longer bug report/fix
loop. A customer that cannot work around a problem is somewhat more
agitated than one that can :-)

A deliberate crippling also sounds like a 'feeding trough' for lawyers,
should such customers decide Xilinx has materially affected their
business.

> Another posting from a Xilinx employee several months ago said that
> eliminating the EDIF netlist generation capability is "a huge advantage
> that XST now has.", but as far as I could tell when I tried synthesizing
> a small design with an NGC output option and an EDIF output option, I
> didn't see a difference in logic usage.
>
> http://groups.google.com/groups?hl=en&lr=&selm=3C2CDBDA.5060102%40xilinx.com
>
> I don't quite understand what this "huge advantage" really is, but
> perhaps it is an opportunity for third-party synthesis tool vendors to
> make more money.
> Another plausible reason why XST actually still generates an EDIF
> netlist might be that Xilinx needs this capability for debugging
> purposes of the synthesis tool, although I am sure Xilinx has an
> NGC2EDIF conversion tool in-house for development purposes.

Anyone who decides to encrypt files that allowed users to fix sw issues
must have supreme confidence in their flows, and zero defects. Xilinx
should be saluted for having reached this important milestone.

-jg
Article: 43468

Kevin Brace wrote:

> Rick Filipkiewicz wrote:
>
> > Nice work Kevin, esp. the new conspiracy reason.
> >
> > Question: For which version of XST will EDIF well & truly disappear?
> > Place your bets, ladies and gentlemen; in a few moments, no more bets.
>
> Is this question intended for Xilinx, or me?

Neither really. It's just that if

(a) your conspiracy theory is correct

and

(b) Xilinx realise the secret is out

then Xilinx will have to remove -ofmt EDIF to keep the synth tool vendors
happy.

But maybe I'm just having a more-cynical-than-usual sort of day.
Article: 43469

Addressing the front part of your recent post inline below:

Jan Ziak wrote:

> OK then. All of you have supposed that I know very little about FPGAs.
> It is not as you think, but I must admit that I am new to FPGAs.

Since hardware design provides operation based on logic and register
elements, your talk about algorithms in an abstract sense gave me the idea
that you're not familiar with what it takes to put an algorithm into
dedicated silicon. My apologies if I confused the abstraction with limited
knowledge of the target architecture.

> But a part of my writings was a question: if it is not so, why can't it
> be done? Are the properties of silicon arrays responsible for this? What
> technologies can be used to make it work? I am sure that all of you
> would use a fully reconfigurable FPGA as soon as it became available. So
> I think these questions are worth answering.

The devices we use now are "fully reconfigurable" in the sense that a
typeset machine provides "fully reconfigurable text output." We can design
a device to do what we want. We can simulate and verify the functionality.
We can do massive amounts of work every clock cycle. I don't feel the need
to extend well beyond this paradigm since I cannot understand the benefit.

How would the design requirements translate into algorithms? Right now
it's through design engineers who know the target device architectures and
effective HDL (hardware description language) coding techniques, in much
the same way as a software engineer knows the target processor
architecture and effective HLL (higher level language) coding techniques.
I'd like to get an idea on how to improve either of these design flows,
from a design requirement to a realized algorithm in an FPGA or a
processor.

> The overall target of questions, answers and ideas in this message
> thread should not be how it can't be done but how it can be done. More
> precisely, the target is still this one: we want to make (or discuss) a
> reconfigurable device that will allow us to make almost any
> interconnection between "branches" and "operation units and memory" in
> any acceptable form. The two quoted strings stand for basic building
> blocks of an algorithm.

Help me understand, please... Is a "branch" not a selection between two
pieces of an algorithm? In my interpretation of a sequential code
algorithm, I would design my hardware not to send data to a different
piece of the silicon, but instead change how the data is modified through
a common path based on the branch condition; sometimes this simplification
won't hold and some silicon must sit idle while an alternative path is
used, but it's often much less efficient.

The "operation units" sound like adders, shift right operators, logical
ANDs and other ALU style items. We deal with logical operations which are
a very different flavor in the operand and result widths as well as
implementation efficiencies. What operations should be designed in the
reconfigurable architecture to be most effective for universal algorithm
appeal?

The "memory" is varied in the ability to manipulate it in convenient and
efficient forms, now more than ever. We have logic tightly coupled to
registers - memory elements - and we have larger memory arrays that can
deal with kbits of storage. We have what we need. Please tell me what we
don't yet know we need.
Article: 43470

I don't quite understand why you would want to read the GSR signal;
however, there is a fairly easy way to do so that I have not heard anyone
else suggest. All you need to do is drive the D input of any register in
the chip with a constant one, then read the output of that register. When
GSR is activated, that output will be zero. When it is released, it will
become a one, one clock cycle later. Will that accomplish what you are
looking for?

--
Brian

Scott Schlachter wrote:

> Hi Stephanie,
>
> Just happened across your posting. To drive/monitor GSR, probably best
> to use the Startup block as what's already been suggested - although in
> that case you can only monitor whatever you use to drive GSR, rather
> than directly monitoring GSR. Although it is fairly cumbersome and not
> asynchronous, if you just want to poll the status of GSR from time to
> time, and you also happen to have (or have the ability of designing in)
> access to the JTAG port, you can read the output of the "Status
> Register", one bit of which will report the GSR status (GSR_B). See the
> following app notes:
>
> http://www.xilinx.com/xapp/xapp188.pdf
> http://www.xilinx.com/xapp/xapp151.pdf
>
> Hope that helps,
> -Scott S.
>
> Stephanie McBader wrote:
>
> > OK, that I understand.. But what I need is to *read* the GSR net - how
> > can I do that?
> >
> > Still looking out for help :)
> >
> > - Cross posted to comp.arch.fpga
> >
> > Best Regards,
> >
> > Stephanie McBader
> > Researcher/Design Engineer
> > NeuriCam S.p.A
> > Via S M Maddalena 12
> > 38100 Trento TN, Italy
> > Tel: +39-0461-260552
> > Fax: +39-0461-260617
> >
> > Allan Herriman wrote:
> >
> > > [snip]
> > > You can drive GSR (to reset or set every flip flop in the chip) by
> > > asserting the GSR input on the startup block at any time.
> >
> > I earlier wrote:
> >
> > >> Hi there,
> > >>
> > >> Apologies if this has come up before - I have searched both
> > >> newsgroups and the Xilinx support website but I have not found
> > >> *exactly* what I'm looking for.
> > >>
> > >> Is it possible to *read* the GSR signal of a Spartan-II FPGA? The
> > >> problem is that we have a system in which a controller must be
> > >> implemented inside the FPGA but which does not provide any form of
> > >> reset signal as an input to the FPGA logic. I have had a look at the
> > >> Spartan-II datasheet and on page 19 there is a waveform describing
> > >> the start-up sequence of the FPGA after programming. The GSR is
> > >> evidently active high and is negated shortly after DONE goes high.
> > >> I would like to be able to use the GSR value as an asynchronous
> > >> reset input to the logic implemented in the FPGA.. How do I do this?
> > >>
> > >> I have seen numerous answers pointing to the STARTUP block, but this
> > >> only has the GSR signal as an *input*! I need to read it somehow and
> > >> I do not know which source to use..
> > >>
> > >> Oh, we're using VHDL by the way (just thought I'd mention it even
> > >> though this *is* comp.lang.vhdl)
> > >>
> > >> Your help would be much appreciated.
> > >>
> > >> Regards,
> > >>
> > >> Stephanie McBader
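In VHDL, Brian's suggestion might look something like this (a minimal
sketch; the entity and signal names are made up, and it relies on the
device's GSR forcing every flip-flop to its initial state):

    library ieee;
    use ieee.std_logic_1164.all;

    entity gsr_monitor is
      port (
        clk          : in  std_logic;
        gsr_inactive : out std_logic   -- '0' while GSR is asserted
      );
    end gsr_monitor;

    architecture rtl of gsr_monitor is
      signal probe : std_logic := '0'; -- GSR drives this FF to its init value
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          probe <= '1';                -- constant one on the D input
        end if;
      end process;

      -- reads '0' during GSR, and '1' one clock cycle after release
      gsr_inactive <= probe;
    end rtl;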
Article: 43471

Rick Filipkiewicz wrote:

> Neither really. It's just that if
>
> (a) your conspiracy theory is correct
>
> and
>
> (b) Xilinx realise the secret is out
>
> then Xilinx will have to remove -ofmt EDIF to keep the synth tool
> vendors happy.
>
> But maybe I'm just having a more-cynical-than-usual sort of day.

When I discovered that ISE 4.x's XST can actually generate an EDIF
netlist, I thought of telling only the people who had posted questions a
couple of months ago about how to get XST to generate an EDIF file, rather
than posting the information here, so that Xilinx wouldn't know that
someone had figured out how to do it. But rather than keeping it a secret,
I felt it would be better if everyone who wants to know how to do it
could, so I posted it, although I thought that if the secret got out,
Xilinx might really try to get rid of the EDIF netlist generation
capability in XST. I think it will be a big mistake if Xilinx really does
that.

Kevin Brace (In general, don't respond to me directly, and respond within
the newsgroup.)
Article: 43472

Hello!

I have some problems with simulation under the environment mentioned in
the subject. In my VHDL/schematic mixed design I have some instances of
OBUFE4, ROM32X1, and OSC4 from the Xilinx libraries. Functional simulation
works perfectly: the design compiles without any warnings, and the
simulator doesn't complain either. During synthesis I get the following
warning from FPGA Express:

Warning: Cannot link cell 'hdlc_proj/U2' to its reference design 'OBUFE4'. (FPGA-LINK-2)

Should I worry about this? How do I avoid this message?

When I try to do post-synthesis simulation I get the following errors (on
compilation of the post-synthesis VHDL model):

# Warning: ELAB1_0026: hdlc_proj.vhd : (577, 0): There is no default binding for component "OSC4".(No entity named "OSC4" was found).
# Warning: ELAB1_0026: hdlc_proj.vhd : (582, 0): There is no default binding for component "OBUFE4".(No entity named "OBUFE4" was found).
# Warning: ELAB1_0026: hdlc_proj.vhd : (586, 0): There is no default binding for component "ROM32X1".(No entity named "ROM32X1" was found).

I've tried to resolve this issue by manually adding the xc4000e.vhd and
obufe4.edn files to my design, and compiling them into the post-synthesis
library. It seems to work, but I hate it; I'm almost sure there's an
easier way to do it. Also there are still some problems:

- The ROM32X1s are initialised to all zeroes, instead of their INIT values
  (I can see the correct values in the synthesised edf file).
- OSC4's generics are all set to default values. Thus, it doesn't work at
  all unless I set its SEL_500K generic to true manually. It's
  understandable, as I simply compiled the plain library files into my
  post-synthesis lib, with all defaults.

What should I do to do a correct post-synthesis simulation? Any help
appreciated :)

Also I get the following warning during synthesis:

A MUX_OP was not inferred for the case in routine lcd_seq line 61 in file 'J:/ahdl_projekty/uc2_2/compile/lcd_seq.vhd' because the ratio of MUX_OP data inputs to unique data inputs is 2, which exceeds the hdlin_mux_oversize_ratio. (HDL-393)

Can someone explain?

-=Czaj-nick=-
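One way to attack the binding warnings (a sketch only; the library name,
architecture names and testbench names below are assumptions, not taken
from the post) is an explicit VHDL configuration that binds the unresolved
components to the compiled simulation models and overrides the generic
that the netlist loses:

    library xc4000e;  -- assumed name of the library holding the models

    configuration postsyn_cfg of testbench is    -- testbench/bench: assumed
      for bench
        for uut : hdlc_proj
          use entity work.hdlc_proj(structure);  -- 'structure': assumed
          for structure
            for all : OSC4
              use entity xc4000e.OSC4(behavioral) -- 'behavioral': assumed
                generic map ( SEL_500K => true ); -- generic from the post
            end for;
          end for;
        end for;
      end for;
    end postsyn_cfg;

This keeps the hand-edits out of the generated netlist; you then simulate
the configuration (postsyn_cfg) instead of the bare testbench.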
Article: 43473

Falk Brunner wrote:

> we have a design with a XCS30XL, which is utilized to 96%.
> Bad thing is, P&R takes about 30!!! minutes on a Athlon 500. The design
> uses a lot of TBUFs and DP-RAMs.
> Are there some tricks to speed up things? Some floorplanning? Timing
> constraints?

To find out what floorplanning might gain you, once you get a route
completed, re-route the same design with a completely locked-down
floorplan.

How to completely lock down a design (ucf flow): Open the graphical
floorplanner, read in your design, from the "floorplan" menu select. From
the file menu select "ucf flow", then write constraints to a new file.
With a text editor, merge the placement constraints into your ucf file.
Rerun ngdbuild, map and par.

--
Phil Hays
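The constraints the floorplanner writes out are just LOC lines in the ucf;
for a SpartanXL part they look roughly like this (instance names and CLB
sites invented for illustration):

    INST "ctrl/state_reg_0" LOC = "CLB_R8C12";  # hypothetical register
    INST "fifo/ram_bit_0"   LOC = "CLB_R4C20";  # hypothetical RAM primitive

With every instance pinned this way, par only has to route, which also
shows how much of the original runtime was spent on placement.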
Article: 43474

On Mon, 20 May 2002 11:17:14 -0600, m0 wrote:

> This already exists, do a search on google for 'hardware software split
> language pc' exactly and you'll get your hit. Sorry boys.

Uh... when I try that I get a bunch of shopping sites :P

Anyway, even if the idea I proposed has already been invented, it hasn't
been implemented and made Open Source software. If the invention hasn't
been developed and put out there for the use of others, it might as well
not have been invented.

--Micah