Hi, Thanks for the reply... Can you point me to any resources on the web, any sample code (verilog) would be great. Thanks -Prashanth "Antti Lukats" <antti@case2000.com> wrote in message news:clm1d9$33v$05$1@news.t-online.com... > > "Prashanth" <prashanth.thota@gmail.com> wrote in message > news:be62eac0.0410260832.7f92146b@posting.google.com... > > Hi Folks, > > > > Is it possible to send user commands through JTAG to the internal > > Logic in the FPGA ? More Specifically a Spartan 3 FPGA. > > > > Thanks, > > Prashanth > > Definitely! Use the BSCAN primitive, which "routes" the USER1 and > USER2 instructions to the FPGA fabric. > >Article: 75251
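Antti's BSCAN answer can be sketched in Verilog roughly like this. This is a hedged illustration only: the BSCAN_SPARTAN3 primitive and its port names are per the Xilinx Libraries Guide, but the 8-bit register width and LSB-first shift direction are arbitrary choices, not anything from the thread.

```verilog
// Sketch: receive user data over JTAG via the USER1 instruction on a
// Spartan-3, using the BSCAN_SPARTAN3 primitive. Register width and
// shift order are assumptions for illustration.
module jtag_user1 (
    output reg [7:0] user_data   // last byte shifted in via USER1
);
    wire capture, drck1, sel1, shift, tdi, update;
    reg  [7:0] sr;

    BSCAN_SPARTAN3 bscan (
        .CAPTURE(capture), .DRCK1(drck1), .DRCK2(),
        .RESET(), .SEL1(sel1), .SEL2(), .SHIFT(shift),
        .TDI(tdi), .UPDATE(update),
        .TDO1(sr[0]),   // shift out LSB first
        .TDO2(1'b0)
    );

    // Shift-DR state: DRCK1 clocks only while USER1 is the active
    // instruction and the TAP is shifting
    always @(posedge drck1)
        if (sel1 && shift)
            sr <= {tdi, sr[7:1]};

    // Update-DR state: hand the completed byte to user logic
    always @(posedge update)
        if (sel1)
            user_data <= sr;
endmodule
```

On the host side you would drive this through the cable with the USER1 instruction loaded; unconnected outputs (DRCK2, RESET, SEL2) are simply left open here.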
Hal Murray wrote: > > >An async circuit is designed to work over a temp range, but is self > >timed. So the board designer has to test over temperature to make sure > >the circuit speed will be fast enough. .. > > I'm missing something. Why test the board as compared to > read the worst case numbers off the data sheet and see if > they are fast enough? Because the data sheet won't tell you how fast your software will run. Trying to measure the speed of software is very difficult considering all the permutations of paths that it can take. In DSP work this becomes very critical since it is often very much real time. But DSP algorithms are often less complex to analyze than control programs or other tasks that embedded micros are running. I have never seen anyone try to count clock cycles (or ns for async circuits) for each instruction and analyze a program of any complexity. The best they normally do is measure it in a simulator or on the bench. A sync circuit would have the advantage of always having the same timing regardless of temp, voltage and process. The async circuit will vary with those parameters and will need to be verified. Perhaps a prorating figure will be provided to say that if you meet timing with a 20% margin at 25C and worst-case Vdd, then at 70C it will run OK with the worst process. But my understanding is that process variations can be even wider than 20%. I seem to recall a conversation with Xilinx suggesting that you provide 50% or more between max and min delays. Async circuits don't remove the issues of meeting a "clock" timing. They just push the problem to the system level when you have to meet real time requirements. > It's the same problems as checking setup/hold times. Just > turned inside out. > > Is the info not in the data sheet? What info would you like them to spec? -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed.
Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 75252
Phil Short wrote: > > On Sat, 30 Oct 2004 12:16:40 -0400, rickman wrote: > > > > > Yes, both async and sync sequential circuits have a clock. In async the > > clock just passes between adjacent stages. You are calling it a > > handshake, but this is used as a clock on FFs somewhere. Else how do > > you trigger the FFs? > > > > The word clock implies a global clock, or at least a clock that goes > to every flip-flop (storage element) in a large section of the chip. The > handshake signals in async design are local, rather than global in nature > (with, among other things, the benefit of greatly reduced EMI). I never said "global" and I don't see why you would infer that when we were talking about the async circuits. > > Hmmmm... well this could go on all day. I still stand by my point that > > most of the claims of how async circuits are better don't hold water. > > They may be different, but not necessarily better. I'm not sure anyone > > has given a single way in which async circuits are *clearly* better. > > Take a look at the book "Asynchronous Circuit Design" written by Chris J. > Myers and published by Wiley in 2001. Chapter 9 gives some examples that > clearly contradict your comments. One example (RAPPID at Intel) gives a > simultaneous 3:1 improvement in speed, a 50% improvement in power, and a > much larger input voltage range over the synchronous design using the same > fab process, at the expense of 22% more chip area. Other examples showed > similar results. I don't have a copy of that book. Those sound like great results. But there are a lot of other variables and only a handful of examples don't prove the method. The Philips async 8051 (which was discontinued after only a couple of years) doesn't seem to have any special advantages. It is (was) not cheaper than sync chips, it was not lower power (2-5 mA at 4 MIPs) and the lack of predictable speed would be a major issue in my book.
Article: 75253
Phil Short wrote: > > On Sat, 30 Oct 2004 17:14:43 -0500, Hal Murray wrote: > > >>Take a look at the book "Asynchronous Circuit Design" written by Chris J. > >>Myers and published by Wiley in 2001. Chapter 9 gives some examples that > >>clearly contradict your comments. One example (RAPPID at Intel) gives a > >>simultaneous 3:1 improvement in speed, a 50% improvement in power, and a > >>much larger input voltage range over the synchronous design using the same > >>fab process, at the expense of 22% more chip area. Other examples showed > >>similar results. > > > > So why hasn't async technology grabbed a bigger chunk of the market? > > I would assume that async technology is being held back, in part, because > of the lack of widespread availability of design tools, designers familiar > with the techniques involved, and of good production test and > characterization tools, and so forth. A chicken and egg situation is how > I would put it. Or maybe the tools are not being developed because there are no clear advantages to async circuits? Please explain to me in simple terms where the speed, size and power advantages come from? I still have not seen it. Article: 75254
If you have a few spare years it might be able to be explained. The problem is I don't believe there is anybody here who can explain it. Maybe it's like RDRAM. In theory great.. in practice it's moved so slowly that advances in sync logic passed it by. The part I've read is "for the same function, async circuits draw less power, consume 20% more silicon, and run 15% faster." The key here is "for the same function". I've said before, that a piece of silicon with the same geometry and same number of transistors running at the same clock rate draws the same power. It doesn't really matter what it's doing. So if you have 90% of the silicon working for a sync circuit, and 90% working for an async circuit, there is no saving. The saving is async doesn't run all the time, no clocks, no nothing. This is where async gets its gains. I'm sorry I don't have formulas or detailed numbers, but I don't usually design async logic :-) Simon. "rickman" <spamgoeshere4@yahoo.com> wrote in message news:41846C13.9E7D6027@yahoo.com... > Phil Short wrote: > > > > On Sat, 30 Oct 2004 17:14:43 -0500, Hal Murray wrote: > > > > >>Take a look at the book "Asynchronous Circuit Design" written by Chris J. > > >>Myers and published by Wiley in 2001. Chapter 9 gives some examples that > > >>clearly contradict your comments. One example (RAPPID at Intel) gives a > > >>simultaneous 3:1 improvement in speed, a 50% improvement in power, and a > > >>much larger input voltage range over the synchronous design using the same > > >>fab process, at the expense of 22% more chip area. Other examples showed > > >>similar results. > > > > > > So why hasn't async technology grabbed a bigger chunk of the market? > > > > I would assume that async technology is being held back, in part, because > > of the lack of widespread availability of design tools, designers familiar > > with the techniques involved, and of good production test and > > characterization tools, and so forth.
A chicken and egg situation is how > > I would put it. > > Or maybe the tools are not being developed because there are no clear > advantages to async circuits? > > Please explain to me in simple terms where the speed, size and power > advantages come from? I still have not seen it. Article: 75255
rickman <spamgoeshere4@yahoo.com> wrote in message news:<4183C855.1CF5E35E@yahoo.com>... > > In case (2), if the signal is coming from a hardware pin (using an > > IBUF component), the value need not be GND (or even constant). > But you must actually connect it to an outside pin. All you have done > is name the net on the other side of the IBUF or OBUF, you have not > connected it to the outside world. > That is what the port definition is > for. Look at it from the perspective of the VHDL compiler. Your signal > button has no driver specified for it. I have specified that BUTTON is connected to P22, using the "LOC" attribute, and I hoped that XST would infer that P22 would act as a driver for the BUTTON net. The IPAD documentation (Libraries Guide) says: "An IPAD can be inferred by NGDBUILD if one is missing on an IBUF or IBUFG input". I would have expected this implicit IPAD to be flagged as a driver for the net. Still, I will try to instantiate an IPAD by hand, perhaps it helps. Alternatively, I would be perfectly happy to attach another attribute to the BUTTON to convey to XST that BUTTON is driven by P22 - if only I knew of such an attribute. > You are expecting the compiler to figure out that an IBUF connects to the outside world. Yes, and I fail to see why that is a strange expectation. Surely you must agree that, in principle, this could be the case? The compiler is smart enough to infer IPADs (this is special-cased) - why wouldn't it at the same time flag the inferred IPAD as a driver? > You have to tell it that by using a port definition. That is the easy way for sure - and the only one that currently works as far as I can tell. However, it would help me if this wasn't so. Allow me to provide some context on why this would help: I am implementing a Spartan-3 based SoC, using a softcore 6502 processor; I have implemented memory-mapped I/O to the devices available on my development board (RS232, LEDs, BUTTONS, and so on).
However, I have three types of development boards (digilent, nuhorizons, memec), that I want to be able to use - all have different strengths and weaknesses (well, the nuhorizons board has only weaknesses - but that's a different matter altogether). I would like to keep the things that are different for each of the boards concentrated in my "memory mapped I/O" VHDL source, for which I have three versions - one for each board. However, since XST needs toplevel ports for all external ports, I am currently forced to have three different versions for two hierarchical levels of design above the "memmapped-io.vhdl" as well, namely my toplevel design (toplevel.vhdl) and the MMU (mmu.vhdl) that routes between the CPU and the RAM, ROM, and I/O apertures. The only reason I need three versions of my toplevel and mmu entities is that I must hook up some external pins into the mmapped_io entity. So this is the background of this. I would be pleased if I could concentrate the development board dependencies in just one VHDL file, which I am currently prevented from doing, since I have to thread all I/O's from toplevel, via the MMU, to the mmapped-io component. Regards, SidneyArticle: 75256
"smu" <pas@d.adresse> wrote in message news:clih72$b8f$1@s5.feed.news.oleane.net... > Hello, > > I am developing an FPGA (BGA package) board. > > Is it possible to check the connections between two pins on two > different FPGAs with the Boundary scan? If you have multiple pins between the two devices don't forget you can write test designs to pass data back and forward to and from each device multiple times. If you have detectors on each input pin you might be able to read back the status of each, thus spotting where any errors are occurring. Nial ------------------------------------------------ Nial Stewart Developments Ltd FPGA and High Speed Digital Design Cyclone Based 'Easy PCI' proto board www.nialstewartdevelopments.co.ukArticle: 75257
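Nial's suggestion of a test design that passes data between the two devices can be sketched roughly as below. This is a hedged illustration, not anything from the post: the bus width, the rotate-left pattern, and the assumption that both devices start the pattern in lockstep are all arbitrary choices.

```verilog
// Sketch: load the same design into both FPGAs and walk a '1' across
// the inter-chip pins; any stuck or shorted pin sets a sticky error
// flag. Width and synchronization scheme are assumptions.
module pin_walk_test #(parameter W = 16) (
    input              clk,
    output reg [W-1:0] pins_out,   // drives the other FPGA
    input      [W-1:0] pins_in,    // driven by the other FPGA
    output reg [W-1:0] error       // sticky per-pin mismatch flags
);
    initial begin
        pins_out = {{(W-1){1'b0}}, 1'b1};
        error    = 0;
    end
    always @(posedge clk) begin
        // Assumption: both devices run the identical pattern and were
        // released from reset together, so input should mirror output.
        error    <= error | (pins_in ^ pins_out);
        pins_out <= {pins_out[W-2:0], pins_out[W-1]}; // rotate left
    end
endmodule
```

Reading the error register back (over JTAG, say) then tells you exactly which pins failed.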
Simon Peacock wrote: > > If you have a few spare years it might be able to be explained. The problem > is I don't believe there is anybody here who can explain it. Maybe it's like > RDRAM. In theory great.. in practice it's moved so slowly that advances in > sync logic passed it by. I have always held the idea that if a person can not explain something clearly, then they likely don't really understand it themselves. At least that was always my problem. :) Back in college I had a roommate who asked why the tides bulged on *both* sides of the earth and not just on the moon side. I kept trying to explain it and finally realized that I didn't really know how to explain it because I didn't understand what was pulling the tide on the opposite side. Eventually I figured out that it was centrifugal force. I still believe there are *no* things that are hard to understand, only things that are not well understood. And of course, in this case, things that are not really accurate... > The part I've read is "for the same function, async circuits draw less > power, consume 20% more silicon, and run 15% faster." The key here is "for > the same function". I've said before, that a piece of silicon with the same > geometry and same number of transistors running at the same clock rate draws > the same power. It doesn't really matter what it's doing. So if you have > 90% of the silicon working for a sync circuit, and 90% working for an async > circuit, there is no saving. The saving is async doesn't run all the time, > no clocks, no nothing. This is where async gets its gains. I'm sorry I > don't have formulas or detailed numbers, but I don't usually design async > logic :-) I understand what you are saying, but in a real world circuit, async devices don't just stop running of their own accord to save power. Consider what is initiating the async circuit.
Either it is running in a feedback mode triggering itself when it completes each pass, like a CPU; or it is triggered from an external event, like a clock! In both cases the async circuit runs all the time, in fact an async CPU won't be executing NOPs (and even NOPs require circuits to draw power), it will be running code in a loop if nothing else. It can shut down by executing code to go into a low power state waiting for an external or timer interrupt, but so can a sync CPU. I'm not trying to be a PITA, but no one here has really given this much thought. I keep reading a lot of stuff that is very generalized and does not really describe async vs. sync circuits once you dig a bit. There are differences, such as the clocking method. But they are apples and oranges and until you squeeze them a bit you won't get any juice. What I mean is which one works better depends on how well the details can be optimized. Article: 75258
Sidney Cadot wrote: > > I would like to keep the things that are different for each of the > boards concentrated in my "memory mapped I/O" VHDL source, for which > I have three versions - one for each board. > > However, since XST needs toplevel ports for all external ports, I am > currently forced to have three different versions for two hierarchical > levels of design above the "memmapped-io.vhdl" as well, namely my > toplevel design (toplevel.vhdl) and the MMU (mmu.vhdl) that routes > between the CPU and the RAM, ROM, and I/O apertures. The only reason I > need three versions of my toplevel and mmu entities is that I must > hook up some external pins into the mmapped_io entity. Maybe I am being a little dense today, but I still don't understand. If your IO functions are the same why would you need different ports? What is different between the boards that you can't just describe the differences in the IO pin assignments in the UCF file? Even if you don't use a port, don't you still need different top level files? Instantiating the IO pads is a good idea. If that doesn't work nothing will. Article: 75259
Rick, I agree with you on this whole ASYNC thing. No-one in this thread has offered any explanation as to why ASYNC should out-perform SYNC circuits, or addressed your concerns. They just offer quotes from academics whose research grants depend on it, or suggest that it's too complex to explain. As you say, after many years of research, the dearth of commercial applications is pretty damning. On the other hand, 4 billion years of natural selection can't be wrong, I'm pretty sure the logic circuit in my head is asynchronous. At least that's what it's telling me now! Brains run on about 20 Watts. Cheers, Syms. "rickman" <spamgoeshere4@yahoo.com> wrote in message news:4185141A.E4005C39@yahoo.com... > > I'm not trying to be a PITA, but no one here has really given this much > thought. I keep reading a lot of stuff that is very generalized and > does not really describe async vs. sync circuits once you dig a bit. >Article: 75260
On Sun, 31 Oct 2004 09:42:06 -0800, Symon wrote: > Rick, > I agree with you on this whole ASYNC thing. No-one in this thread has > offered any explanation as to why ASYNC should out-perform SYNC circuits, or Performance of a sync device depends on the clock rate, which depends on the worst case delays through combinatorial logic and routing delays. For example, if the clock period of a design is determined by the delay through a multiplier array, the time between the completion of a simple addition and the next clock edge could be quite long. Performance of an async device depends, in some sense, on average (rather than maximum) delays, and so the result can be available much sooner. > addressed your concerns. They just offer quotes from academics whose > research grants depend on it, or suggest that it's too complex to > explain. As you say, after many years of research, the dearth of > commercial applications is pretty damning. Not very damning at all. There are many examples in which superior technologies have failed in the market place, with VHS versus Beta being the standard example, and GaAs vs Si another example, and BeOS yet another. Lack of success can be due to a lot of factors unrelated to the technology or product itself. Bad marketing, bad timing, network effect, etc. Factors other than technological merit are quite often the reason that products, technologies, and companies succeed or fail, and using failure as evidence of a lack of technological merit is totally fallacious logic. -- PhilArticle: 75261
rickman wrote: > > Or maybe the tools are not being developed because there are no clear > advantages to async circuits? .. But the tools ARE being developed - that's where this sub-thread started. Sure, they are not what FPGA users would call mainstream yet ( and probably will not be for FPGA design ), but Philips are one of the more cautious companies, and they are also in a position of having made real silicon. > > Please explain to me in simple terms where the speed, size and power > advantages come from? I still have not seen it. If you look at Figs 49 thru 52 in the Philips 87C888, you get some idea. The MIPS/Watt values are very good, especially on what was a relatively old process. See also how MIPS/Watt scales with Vcc. Async is not going to displace Sync designs in all areas, but it does illuminate design pathways for lower power. One of those is varying Vcc. Presently FPGAs spec only ONE Vcc, but a recent thread covered an emerging potential for Wider variances on Vcc. This is somewhat innate in the silicon, it just needs the mindset and specs to change to use it. -jgArticle: 75262
On Sun, 31 Oct 2004 11:34:34 -0500, rickman wrote: > Simon Peacock wrote: >> >> If you have a few spare years it might be able to be explained. The problem >> is I don't believe there is anybody here who can explain it. Maybe it's like >> RDRAM. In theory great.. in practice it's moved so slowly that advances in >> sync logic passed it by. > > I have always held the idea that if a person can not explain something > clearly, then they likely don't really understand it themselves. At > least that was always my problem. :) Back in college I had a roommate > who asked why the tides bulged on *both* sides of the earth and not just > on the moon side. I kept trying to explain it and finally realized that > I didn't really know how to explain it because I didn't understand what > was pulling the tide on the opposite side. Eventually I figured out > that it was centrifugal force. > > I still believe there are *no* things that are hard to understand, only > things that are not well understood. And of course, in this case, > things that are not really accurate... > It seems to me that you want something that is impossible. On one hand, you seem to want an explanation that is reduced to bite-sized slogans. On the other hand, you want to argue with the simplified explanation, picking on details that are too complicated to fit into one sentence. So which do you want - a simple slogan, or a detailed, nuanced discussion? You can't have both. You've had the first, and decided to argue with it. So you must really want the second, in which case you should really take the time to read what much more qualified persons have written about at length in books and formal papers. > There are differences, such as the clocking method. But they are apples > and oranges and until you squeeze them a bit you won't get any juice. > What I mean is which one works better depends on how well the details > can be optimized.
> One obvious source of juice is the difference between the longest and shortest combinatorial delays (i.e. flip-flop output delay plus routing delays plus LUT delays plus flip-flop setup time plus (perhaps) clock skew). The clock period in a sync design is determined by the maximum delay. However, the device still has to wait for an entire clock period even during a cycle when all relevant combinatorial delays are much less than the maximum. This would not be the case in an async design, where the performance of a circuit over a period of time is more likely to be a multiple of the average combinatorial delay rather than a multiple of the maximum combinatorial delay. EMI reduction due to spreading the switching current spikes over time comes for free in an async design, rather than requiring special clock chips. -- PhilArticle: 75263
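The local request/acknowledge handshake that this sub-thread keeps referring to can be sketched behaviorally like this. It is a hedged, simulation-level illustration only: real async design needs matched delays and hazard-free logic that plain synthesis won't give you, and the module and signal names here are invented for illustration.

```verilog
// Sketch: one bundled-data pipeline stage where a four-phase
// request/acknowledge pair plays the role the clock edge plays in a
// sync design. Behavioral simulation model only, not synthesizable
// async logic.
module async_stage #(parameter W = 8) (
    input              req_in,   // predecessor: data_in is valid
    output reg         ack_in,   // us: we have taken data_in
    input      [W-1:0] data_in,
    output reg         req_out,  // us: data_out is valid
    input              ack_out,  // successor: it has taken data_out
    output reg [W-1:0] data_out
);
    initial begin ack_in = 0; req_out = 0; end

    always begin
        wait (req_in);          // predecessor raises its request
        data_out = data_in;     // bundled-data assumption: data is
                                // already stable when req arrives
        ack_in   = 1;
        req_out  = 1;           // forward the request downstream
        wait (ack_out);         // successor has consumed the data
        req_out  = 0;
        wait (!req_in);         // return-to-zero half of the cycle
        ack_in   = 0;
        wait (!ack_out);
    end
endmodule
```

The point of the sketch is the one being argued above: the stage moves on as soon as its neighbours acknowledge, rather than waiting out a worst-case clock period.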
>> I'm missing something. Why test the board as compared to >> read the worst case numbers off the data sheet and see if >> they are fast enough? > >Because the data sheet won't tell you how fast your software will run. >Trying to measure the speed of software is very difficult considering >all the permutations of paths that it can take. In DSP work this becomes >very critical since it is often very much real time. But DSP algorithms >are often less complex to analyze than control programs or other >tasks that embedded micros are running. ... I'm assuming we have a good data sheet that lists the worst case times for each instruction. That's generally true for simple sync CPUs. It gets more complicated with high performance CPUs. If the program is simple, you can trace the flow. For example, with a DSP system you know how many times you go around the loop in a filter or FFT. For a sync system, you can count cycles. For an async system, you could probably write some software to do the equivalent sort of bookkeeping. What do people do for complicated systems? I'd probably toss a counter into the wait loop and figure out what fraction of the CPU was idle. Maybe make a histogram and see how far out the tail goes. Round up more if the cost of failure is higher. With an async system, I'd expect you could do the same sort of thing. Maybe use timers rather than a spin/poll loop to keep with the low-power philosophy. But now the problem is that you have to correct for temp, voltage, and process. Temp and voltage you can measure. You can probably measure process by running some calibration code. But you still have to add a fudge factor for the software. How important is the software uncertainty relative to the hardware uncertainty? -- The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses.
These are my opinions, not necessarily my employer's. I hate spam.Article: 75264
>In Cyclone and Cyclone II, the PLL inputs can only be driven by dedicated >clock pins. The PLL does provide an external output, so I guess you could >route that to the other PLL input on your board. I'm far from a wizard on PLLs, but I remember a lot of troubles in the early SONET days. I think the problem was that the filter on the second PLL let through the noise from the first PLL so the second PLL ended up dancing round much more than you would expect. Maybe it took more than two to cause trouble. Article: 75265
On Sun, 31 Oct 2004 14:49:44 -0600, Hal Murray wrote: > With an async system, I'd expect you could do the same sort > of thing. Maybe use timers rather than a spin/poll loop to > keep with the low-power philosophy. But now the problem is > that you have to correct for temp, voltage, and process. > Temp and voltage you can measure. You can probably measure > process by running some calibration code. But you still have > to add a fudge factor for the software. How important is the > software uncertainty relative to the hardware uncertainty? It strikes me that the hardware vendor would characterize and bin the devices in some way that would be meaningful, and provide design tools that would aid in this manner. For a fixed-function device (e.g. UART or fifo) the binning process at the semiconductor vendor would take care of the speed issue in just the same manner as is done today (from the viewpoint of someone incorporating the part in their design). For an instruction-programmable part (i.e. CPU) this would also be the same as for today (parts speed-graded by the manufacturer, user doesn't have any meaningful way to compare parts). And who knows if there will ever be async programmable logic parts, but if there is, again the manufacturer speed-grades the parts and has to provide layout and (worst-case) static timing tools to the designer (no more 'magic' than the current FPGA/CPLD situation). -- PhilArticle: 75266
I installed ISE 6.2i without any service pack on a PIII 1GHz. It is running slowly. When I use "Project->new source" and the "IP core wizard" source type, CoreGen is supposed to show some components, but the window is empty and shows no components at all. Do I need to buy some components for it? Thanks,Article: 75267
sidney@jigsaw.nl (Sidney Cadot) wrote in message <snip> > > You have to tell it that by using a port definition. > > That is the easy way for sure - and the only one that currently works > as far as I can tell. However, It would help me if this wasn't so. > Allow me to provide some context on why this would help: > <snip> The Designer's Guide To VHDL 2'nd Edition Peter J. Ashenden Page 8 "Using VHDL terminology, we call the module reg4 a design entity, and the inputs and outputs are ports. Figure 1-7 shows a VHDL description of the interface to the entity. This is an example of an entity declaration. It introduces a name for the entity and lists the input and output ports, specifying that they carry bit values ('0' or '1') into and out of the entity. From this we see that an entity declaration describes the external view of the entity." If one were to be able to snake a signal out, without mapping it thru a port, then the entity declaration would not describe the external view of the entity. NewmanArticle: 75268
Hi Phil, Comments added:- "Phil Short" <pjs@switchingpost.nospam.com> wrote in message news:pan.2004.10.31.19.38.29.636872@switchingpost.nospam.com... > On Sun, 31 Oct 2004 09:42:06 -0800, Symon wrote: > > Performance of a sync device depends on the clock rate, which depends on > the worst case delays through combinatorial logic and routing delays. For > example, if the clock period of a design is determined by the delay > through a multiplier array, the time between the completion of a simple > addition and the next clock edge could be quite long. Performance of an > async device depends, in some sense, on average (rather than maximum) > delays, and so the result can be available much sooner. > However, sync circuits cope with this by just waiting more cycles for the result to appear. The async circuit maybe squeezes the last little bit of performance out, but at the expense of a whole load of handshaking stuff. > > Not very damning at all. There are many examples in which superior > technologies have failed in the market place, with VHS versus Beta being > the standard example, and GaAs vs Si another example, and BeOS yet another. > Lack of success can be due to a lot of factors unrelated to the > technology or product itself. Bad marketing, bad timing, network effect, > etc. Factors other than technological merit are quite often the reason > that products, technologies, and companies succeed or fail, and using > failure as evidence of a lack of technological merit is totally fallacious > logic. > Well, that's your opinion. My opinion is that the market is rarely wrong, especially when the technology has been around for decades, and it's an error of judgement to cherry-pick one or two past examples where marginally better technology failed in order to disprove this. The exception proving the rule and all that. If async stuff was really 3 times faster and used 50% of the power, as you quoted in a previous post, we'd most likely see a whole lot more of it.
Just my opinion! Best, Syms.Article: 75269
Hi all, what is the max frequency you have achieved with the TSMC 0.18um library? I am trying to synthesise a design at 528 MHz (1.8ns). Design Compiler results show that the synthesis timing constraints were met. But when I do netlist simulations I am getting 1000's of timing violations... I checked the verilog simulation model for the library and found that the setup and hold checks are done in the library for 1ns... i.e. for a clock edge I need 1+1 ns just to satisfy the setup/hold constraints... well my doubt is why the DC tool says the timing constraints are met.. do you have any idea? thanks whizkidArticle: 75270
smu <pas@d.adresse> wrote in message news:<clih72$b8f$1@s5.feed.news.oleane.net>... > Hello, > > I am developing an FPGA (BGA package) board. > > Is it possible to check the connections between two pins on two > different FPGAs with boundary scan? > > If so, does a tool exist that can make this kind of test using > the board schematic? > > Thank you in advance. > > smu Intel has an app note on using JTAG scanning with the 386EX. It includes a DOS driver and a schematic - the hardware attaches to the parallel printer port.Article: 75271
Can I safely connect Xilinx fpga pins to an external connector in a commercial product? I want to be able to control direction and use them in some yet to be determined manner. Will this be a problem for esd testing? Am I asking for someone to zap the most expensive part in my box? What circuitry should I add to protect the fpga? Alan Nishioka alann@accom.comArticle: 75272
sidney@jigsaw.nl (Sidney Cadot) wrote in message > > I would like to keep the things that are different for each of the > boards concentrated in my "memory mapped I/O" VHDL source, for which > I have three versions - one for each board. > > However, since XST needs toplevel ports for all external ports, I am > currently forced to have three different versions for two hierarchical > levels of design above the "memmapped-io.vhdl" as well, namely my > toplevel design (toplevel.vhdl) and the MMU (mmu.vhdl) that routes > between the CPU and the RAM, ROM, and I/O apertures. The only reason I > need three versions of my toplevel and mmu entities is that I must > hook up some external pins into the mmapped_io entity. > This isn't the perfect solution, but in verilog you could do something like use a define to set the board type and then a bunch of ifdef's to monkey with the port lists on the higher level modules. You could even use an ifdef to instantiate the correct hardware-specific low-level module. If you don't want to contaminate the top-level files, you can `include the portions that are different from other files. Basically, the idea is to use the preprocessor rather than the HDL's abstraction model to hide the hardware details. I'm sure there's a similar mechanism in VHDL? Okay, it's not perfect, but it works for C programmers writing multi-platform code... (What's really cool is if you run the whole thing with a makefile, you can pass the matching parameters not only to the HDL compiler, but also to the compiler of the software that runs on your SoC) Chris PS - these spartan 3 kits are fun... with the addition of one chip and a bunch of resistors, I have one of the digilent ones pulling pictures off an IDE hard drive and generating a color composite NTSC signal to drive a little LCD tv...Article: 75273
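[Editor's note: a minimal sketch of the `ifdef scheme Chris describes in the post above. The BOARD_A/BOARD_B macros, the port names, and the mmio_* module names are all made up for illustration - they are not from the original post.]

```verilog
// Select a board at compile time, e.g.  +define+BOARD_B  on the
// compiler command line; BOARD_A is the fallback if none was given.
`ifndef BOARD_A
 `ifndef BOARD_B
  `define BOARD_A
 `endif
`endif

// Minimal stand-ins for the board-specific memory-mapped I/O blocks,
// just so the sketch is self-contained.
module mmio_board_a (input wire clk, inout wire [7:0] ide_data,
                     output wire led);
    assign led = 1'b0;
endmodule

module mmio_board_b (input wire clk, output wire ntsc_out,
                     output wire led);
    assign ntsc_out = 1'b0;
    assign led = 1'b0;
endmodule

module toplevel (
    input  wire       clk,
`ifdef BOARD_A
    inout  wire [7:0] ide_data,   // only board A brings out an IDE bus
`endif
`ifdef BOARD_B
    output wire       ntsc_out,   // only board B drives composite video
`endif
    output wire       led
);

`ifdef BOARD_A
    mmio_board_a u_mmio (.clk(clk), .ide_data(ide_data), .led(led));
`endif
`ifdef BOARD_B
    mmio_board_b u_mmio (.clk(clk), .ntsc_out(ntsc_out), .led(led));
`endif

endmodule
```

[With a makefile driving the flow, one `+define+` selects both the port list and the board-specific module, as Chris suggests. VHDL has no built-in preprocessor - which is presumably why he asks - so one common workaround there is an external generic preprocessor such as m4.]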
Jim Granville wrote: > > rickman wrote: > > > > Or maybe the tools are not being developed because there are no clear > > advantages to async circuits? > > .. But the tools ARE being developed - that's where this sub-thread started. > Sure, they are not what FPGA users would call mainstream yet > ( and probably will not be for FPGA design ), but Philips are one of the > more cautious companies, and they are also in a position of having made > real silicon. "Not ... mainstream" is putting it mildly... > > Please explain to me in simple terms where the speed, size and power > > advantages come from? I still have not seen it. > > If you look at Figs 49 thru 52 in the Philips 87C888 data sheet, you get some > idea. > The MIPS/Watt values are very good, especially on what was a relatively > old process. See also how MIPS/Watt scales with Vcc. What are you comparing it to? My copy of the data sheet is dated 2002 and says it supersedes the 2000 version. Most of the chips I would say run at about the same MIPS/W are also 4 years old: MSP430, PIC16, AVR... > Async is not going to displace Sync designs in all areas, but it > does illuminate design pathways for lower power. > One of those is varying Vcc. Varying Vcc reduces power for *ALL* chips. I know for a fact that most of the PIC MCUs are designed for a range of Vcc, often 2.7 to 5.5 volts. In fact the power varies with the square of the voltage, since both the current and the voltage change. > Presently FPGA's spec only ONE Vcc, but a recent thread covered > an emerging potential for wider variances on Vcc. > This is somewhat innate in the silicon, it just needs the > mindset and spec changes to use it. This is not at all "innate" in silicon. The chips just need to be designed for a range of Vcc rather than optimized for the best Vcc, as FPGAs are. -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. 
Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 75274
Symon wrote: > > However, sync circuits cope with this by just waiting more cycles for the > result to appear. The async circuit maybe squeezes the last little bit of > performance out, but at the expense of a whole load of handshaking stuff. Actually async circuits also have to leave margin. The "handshake" is timed by delays in the silicon, which must be given some margin over the slowest path through that section of the combinatorial logic. So just as you would set your system clock speed a bit slower than the worst case combinatorial delay path, they must do the same, just at a lower level. When the external system has real-time requirements, the async chip must meet that speed requirement at its slowest. Running faster then is of no benefit; you just end up in your idle loop running more cycles and burning more power. > > Not very damning at all. There are many examples in which superior > > technologies have failed in the market place, with VHS versus Beta being > > the standard example, and GaAs vs Si another example, and BeOS yet another. > > Lack of success can be due to a lot of factors unrelated to the > > technology or product itself. Bad marketing, bad timing, network effect, > > etc. Factors other than technological merit are quite often the reason > > that products, technologies, and companies succeed or fail, and using > > failure as evidence of a lack of technological merit is totally fallacious > > logic. > > > Well, that's your opinion. My opinion is that the market is rarely wrong, > especially when the technology has been around for decades, and it's an error > of judgement to cherry-pick one or two past examples where marginally > better technology failed in order to disprove this. The exception proving the rule > and all that. If async stuff was really 3 times faster and used 50% of the > power, as you quoted in a previous post, we'd most likely see a whole lot > more of it. Just my opinion! > Best, Syms. 
Actually his examples don't really show anything. Beta vs. VHS was a marketing issue because Sony wanted unreasonable licensing fees, and I don't think there *IS* any marketing on async logic. GaAs vs. Si is not an issue of one being better; each has its advantages and each is used when appropriate. This whole discussion is getting too long. If there are any facts I would like to hear them. -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAX