Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
"dp" wrote: >Phil Hays wrote: >> Caution: Contents may contain sarcasm. >with all the sarcasm included. Oh dear. Please excuse me. I use several different signatures depending on the topic at hand. Please accept my statement that I didn't intend to use this signature for this discussion. -- Phil HaysArticle: 96426
I've used quartus_pgm in a makefile successfully some time (and perhaps some Quartus version) ago, but now I get a strange error:

Info: Command: quartus_pgm -c ByteBlasterMV -m JTAG -o p;quartus/cycconf/cyc_conf_init.pof
Info: Using programming cable "ByteBlasterMV [LPT1]"
Error: Quartus II Programmer was unsuccessful. 0 errors, 0 warnings
Error: Processing ended: Fri Feb 03 16:59:16 2006
Error: Elapsed time: 00:00:01

The JTAG chain contains two devices: one MAX-PLD and one Cyclone FPGA. I just want to program the PLD. What's wrong with this command?

BTW: with jbi32 it works: jbi32 -dDO_PROGRAM=1 -aPROGRAM ..\..\jbc\cycmin.jbc

Martin

Article: 96427
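One thing worth checking, sketched here as a guess rather than a verified fix: with two devices in the chain, quartus_pgm generally needs the target device's chain position appended to the file with "@", and the ";" in the -o argument needs protecting from the shell inside a makefile recipe. The file name and the assumption that the target sits at position 1 are illustrative only:

```make
# Hypothetical makefile fragment: program only the device at JTAG chain
# position 1. Verify the real chain order first with:
#   quartus_pgm -c ByteBlasterMV -a
pgm:
	quartus_pgm -c ByteBlasterMV -m JTAG -o "p;quartus/cycconf/cyc_conf_init.pof@1"
```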
Hi,

When I remove the following line in the UCF file,

INST "clock_ibuf" LOC = "BUFGMUX4S" ;

there is no error encountered in the MAP stage. But in the PAR stage, the following error occurred:

--------------------------------------------------------
ERROR:DesignRules:576 - Netcheck: The signal clock_IBUF has a sigpin on the comp Q1_12_OBUF that is not in the same route area as another sigpin of the same signal. This is not permited for Modules in partial reconfiguration mode unless the signal has the property IS_BUS_MACRO.
ERROR:DesignRules:10 - Netcheck: The signal "clock_IBUF" is completely unrouted.
--------------------------------------------------------

I found this Answer in the Answer browser:

-------------------------------------------------------
This error occurs when signals attempt to route across a reconfigurable module boundary without going through a bus macro. Please ensure that any signal crossing the reconfigurable module goes through a bus macro.
-------------------------------------------------------

It is unclear to me what is wrong and why the tool uses an "IBUF" instead of an "IBUFG". If someone gives any suggestion, it will be nice. Thank you in advance.

Article: 96428
On 3 Feb 2006 06:33:45 -0800, "dp" <dp@tgi-sci.com> wrote:
>Brian Drummond wrote:
>> ...
>> >the only way a simulator can see DC current
>> >resulting from a static magnetic field is a software bug
>> >or, worse, misconcepted basics behind the software.
>>
>> I don't think he's talking about the magnetic field generating a DC
>> current; but modifying the path of one that exists from other causes
>> (the PSU).
>>
>> Think about those moving electrons (beta particles) in a particle
>> detector; a static magnetic field certainly modifies their path.
>> (Thus you can determine their velocity from its radius)
>
>I really regret I have to go back to this thread. I suggest everyone
>posting more on this nonsense makes sure to consult at least
>some high-school physics books first.
> When the electrons move inside a conductor (metal), this effect
>is seen as a mechanical force applied to the _conductor_.

Think about HOW that force is applied to the conductor. Think about whether the presence of current (motion of electrons) is significant in this process, or whether, as your version suggests, the magnetic field and the conductor alone are sufficient.

>It takes
>electric rather than magnetic field to move electrons inside the
>conductor.

That was never in debate. But once they are in motion, what happens?

- Brian

Article: 96429
On 3 Feb 2006 04:46:39 -0800, "JL" <kasty.jose@gmail.com> wrote:
>Hello all,
>
>I have a problem with internal tristates in Xilinx Virtex-2. They
>basically don't exist, although they are widely used in the documentation.
>Imagine you have N dual port rams with the enable inputs attached to
>the output of a tristate like this:
>Another solution would be to make the pre-synthesis simulator see a 'Z'
>as a '1'. We could achieve this with a pullup component, although I
>never got it to work in the past. I wonder if there is another way to
>make the simulator understand a 'Z' as a '1'.

One option is to permanently drive the tri-state bus with "H".

- Brian

Article: 96430
Uwe,

I am working on that. Thanks.

Austin

Uwe Bonnes wrote:
> austin <austin@xilinx.com> wrote:
>
>> Further,
>
>> Anyone who can point to a clear and simple explanation, please do.
>
>> When I first mentioned this to our packaging group, the lead engineer
>> said "oh yes, I see this in the EM simulations..."
>
>> So, I know I am not imagining it!
>
> Perhaps put some simulation results online, with an explanation of the
> input data. With simulation GIGO (garbage in, garbage out) easily comes
> into play.

Article: 96431
Paul,

The latter (b).

They do carry current, but it is falling off as 1/r or 1/r^2 (I just can't remember which). The BART rails had 2/3 nearest the power rail, and 1/3 in the rail furthest. Which makes me think it was 1/r, not 1/r^2.

Also, BART has shorting links every X meters that tie the two rails together (now) to lessen the return resistance (improve efficiency).

I think I was told that the inner 2X2 balls had 1/8 to 1/16 the current...but it may have been more (or less).

As I already said, I will post some results (when I find them).

Austin

Paul Johnson wrote:
> So, what's the answer? Either
>
> a) The central balls "carry no current at all", are isolated from the
> die GND, and are just for thermal conduction, or
>
> b) They are connected to the die GND, they do carry some return
> current, although less than the GND pins at the edge, and their
> decoupling is still important, but not very important?

Article: 96432
Marc,

I was only talking about flip chip.

The die vias, metal, bumps, planes, vias and balls are all metal. They conduct heat very well. They are very short lengths. The epoxy and pcb material is also a great conductor of heat.

The grease and copper top is good, but not as good. Especially after 800 microns of glass (the backside of the die). Further to go. And then one more interface to the air (terrible) or to a heatsink (also may not be very good).

For all the details, you would have to contact your FAE and have them discuss a particular case with our packaging department.

Austin

Marc Randolph wrote:
> Austin Lesea wrote:
>
>> All,
>>
>> More heat is conducted out the bumps, through the substrate, through to
>> the pcb than through the backside heat spreader (without a heatsink).
>
> Howdy Austin,
>
> Could you provide a bit more detail on this? UG112 seems to say that
> theta JB varies too much from situation to situation to be worth
> publishing. If it has even more impact on cooling the device than the
> theta JC, it seems like more information should be provided.
>
> Furthermore, how much closer to 0 degC/W could the thermal resistance
> be, compared to the ~0.6 degC/W of the flip-chip packages? Or were you
> referring to everything except flip-chip?
>
>> Even with a heatsink, as much as half of the power is going through to
>> the pcb.
>>
>> I know that is hard to believe, but the heat is much closer to the
>> bumps, the bumps are metal (ultra low alpha lead), and they go directly
>> to a copper plane in the substrate (package pcb). FR4 and epoxies are
>> pretty good at conducting heat.
>>
>> The lead balls to the copper pcb completes the (best) heat conduction path.
>
> Isn't the heat spreader on the flip-chips also copper? It seems like
> going through one tiny layer of thermal grease and one layer of
> heatspreader would have less thermal resistance than bumps + epoxy +
> substrate + ball + pad + via + ground plane. I don't see mention that
> the substrate has a substantial amount of copper in it, but that doesn't
> mean it isn't there and just not well documented.
>
>> The backside of the die is almost 1 mm of SiO2 away from the area that
>> is hot, and has to then go through a thermal compound to get to the top
>> heat spreader, and then has to be mechanically bonded to a heatsink (if
>> you really want to get power out of the top of the package).
>
> I think you are referring to non-flip-chip here?
>
> Thank you!
>
> Marc

Article: 96433
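The disagreement above is easier to picture with the standard one-dimensional series-conduction estimate (a generic textbook formula, not Xilinx package data):

$$\theta_{\mathrm{path}} \;\approx\; \sum_i \frac{t_i}{k_i\,A_i}$$

where $t_i$, $k_i$ and $A_i$ are each layer's thickness, thermal conductivity, and cross-sectional area. The bump-side path is a stack of thin metal layers (small $t_i$, large $k_i$) in series, while the backside path begins with roughly 800 microns of die material before the thermal compound and spreader are even reached, so it is at least plausible that the bump-side sum comes out smaller.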
Thanks Brian, that works. Just one hint: the line driving the bus to 'H' must be enclosed between "synopsys translate_off" and "synopsys translate_on" metacommands. Like this:

g_loop: for x in 0 to N-1 generate
  nEnable(x) <= something(x) when (connect(x) = '1') else 'Z';
end generate;

-- synopsys translate_off
g_loop_sim: for x in 0 to N-1 generate  -- note: label must differ from g_loop above
  nEnable(x) <= 'H';
end generate;
-- synopsys translate_on

If you fail to keep the extra code away from synthesis, Xilinx XST will complain about many sources driving a wire.

Thanks. Jose.

Article: 96434
As a purely technical or technological subject, this comparison is meaningless, for every FPGA is actually designed as an ASIC. The difference between FPGA and ASIC, and the reason why the former is growing and the latter is lingering, is economics. Not too many potential ASIC users can afford to invest $50 M or 200 man-years in the development of a state-of-the-art ASIC, but Xilinx and Altera can, and do. They design a basic FPGA that can easily be "step-and-repeated" to generate a whole FPGA family that over its life (of not too many years) results in sales of several billion dollars. That business model works very well for Xilinx and Altera. Ask any investor in LSI Logic how well the ASIC business is doing...

When Paul writes:
> Another factor to take into account is that the FPGA vendor's
> cutting-edge devices - the first ones on 90nm, for example - are
> invariably large, expensive, and low yield, and so probably not useful
> to most customers. So, it could even be argued that the ability to
> take advantage of new processes isn't actually that useful anyway.

I suggest looking at the Spartan-3 family, our highest-running 90-nm family. Few designers would call Spartan-3 "large, expensive, and low yield ... and not useful for most customers". Every new process has a learning curve, but that works in favor of FPGAs, since only they create the enormous volumes that drive yield up and cost down.

In short, FPGA vs ASIC is not a question of technology, but largely of economics and risk tolerance. ASICs are for extreme applications: extreme quantity, extreme complexity, extreme speed, and extremely low power. In most other applications, FPGAs (or other standard parts) are an increasingly popular alternative. I see the FPGA, microprocessor, and ASSP as the obvious choices by default; ASIC is the exception that must be justified in each individual case. But then I admit to being biased...

Peter Alfke, Xilinx

Article: 96435
Hi,

I am very new to Quartus (5.1 Solaris) and looking for simulation libraries for both Verilog and VHDL. The VHDL ones seem to be found in the install tree, but Verilog? When I used Xilinx I always started by compiling the simulation libraries once and for all into the installation tree for each simulator to use. How is this done in Quartus?

cheers Michael

Article: 96436
I've recently read some articles stating that a really fast place and route algorithm for fpgas would be a good thing. In this case, "really fast" means sub-second or even sub-millisecond. For what kind of applications would one need such fast "compile" times? Once one has done the compiling, how fast can one program an fpga anyway?

Article: 96437
On 3 Feb 2006 09:30:13 -0800, "Michael Hennebry" <hennebry@web.cs.ndsu.nodak.edu> wrote:
>I've recently read some articles stating
>that a realy fast place and route algorithm
>for fpgas would be a good thing.
>In this case, "really fast" means sub-second
>or even sub-millisecond.
>For what kind of applications would one
>need such fast "compile" times?
>Once one has done the compiling,
>How fast can one program
>an fpga anyway?

I suppose that if you can p&r really fast, the tool can try lots and lots of different placements to see which is best, giving better results more of the time.

Article: 96438
Michael Hennebry wrote:
> I've recently read some articles stating
> that a really fast place and route algorithm
> for fpgas would be a good thing.
> In this case, "really fast" means sub-second
> or even sub-millisecond.
> For what kind of applications would one
> need such fast "compile" times?

Compile time is a completely different issue. In using an FPGA as a processor (for netlists), the practice of dynamic linking becomes just as useful for FPGA's as it is for processors. Consider that your operating system is loaded with hundreds/thousands of dll's and .so objects for libraries and small modules. It's actually more useful, as the amount of "memory" (AKA LUT's) in an FPGA is smaller, a LOT smaller, so the practice of swapping/paging in smaller netlist segments will be a necessary tool to avoid completely reloading an entire FPGA image in the form of a fresh bitstream.

> Once one has done the compiling,
> How fast can one program
> an fpga anyway?

How many times do you compile a program? ... How many times do you execute/dynamically link an object to run it? Link and go times are much more important than compile times, except maybe for very large programs that a programmer is debugging. Loading an entire bit stream is very time intensive ... loading a few columns is a LOT faster.

Article: 96439
dp wrote:
> When the electrons move inside a conductor (metal), this effect
> is seen as a mechanical force applied to the _conductor_. It takes
> electric rather than magnetic field to move electrons inside the
> conductor. This is how electric motors work. You _cannot_ affect
> the path of the electrons inside the conductor by a static magnetic
> field, just as you cannot force them to exit the conductor.

If true, then why do superconductors have a critical magnetic field level? Moving electrons are influenced by magnetic fields; the key question is how much?

-jg

Article: 96440
Austin Lesea wrote:
> As I already said, I will post some results (when I find them).

I've given seminar talks for the last 20 years pressing designers to constantly reevaluate the underlying assumptions in a design, as they frequently change, and with small invalidations in the foundation, the whole design can, and does, fail. And to actually document them, well.

As a consultant tackling failed projects, one recurring theme when I started probing the design/architecture was asking questions about the assumptions and getting the "everybody knows ..." answer. The "we have always done it that way, and it works ..." answer. Well, why is it now broke?

This is another case of "everybody knows", that will be fun to add to my ongoing talk, "It's not what you know that will hurt you, it's what you think you know", as a case study :)

Article: 96441
fpga_toys@yahoo.com wrote:
> Michael Hennebry wrote:
> > I've recently read some articles stating
> > that a realy fast place and route algorithm
> > for fpgas would be a good thing.

A VERY GOOD P&R is necessary to optimize a hardware design. A VERY FAST P&R that is pretty good, is necessary to use an FPGA as a netlist processor.

Article: 96442
Austin Lesea wrote:
> Paul,
>
> The latter (b).
>
> They do carry current, but it is falling off as 1/r or 1/r^2 (I just
> can't remember which).
>
> The BART rails had 2/3 nearest the power rail, and 1/3 in the rail
> furthest. Which makes me think it was 1/r, not 1/r^2.
>
> Also, BART has shorting links every X meters that ties the two rails
> together (now) to lessen the return resistance (improve efficiency).

Hmmm....

> I think I was told that the inner 2X2 balls had 1/8 to 1/16 the
> current...but it may have been more (or less).

Hmmmm....

> As I already said, I will post some results (when I find them).

Please do. We can agree there is an effect; my antennae just question how much of an effect at DC? You still have to satisfy Ohm's law, so any push effects that favour flow have to model somehow as mV(uV) generators.... To skew ball DC currents 7/8 or 15/16 frankly sounds implausible, and maybe the models there forgot to include resistance balancing effects? [ie do not believe everything you are 'told']

-jg

Article: 96443
I am interested in studying the implementation of simple microprocessors and microprogrammed/microcoded machines and would like some literature pointers. I still have my university text, "The Architecture of Microprocessors" by Francois Anceau... but I find it as hard to read now as I did back in '88. Any recommended texts?

Thanks, Paul.

Article: 96444
Peter Alfke wrote:
> As a purely technical or technological subject, this comparison is
> meaningless, for every FPGA is actually designed as an ASIC.

Nitpick, but why does the "Application Specific" in "ASIC" apply to an FPGA and not to a Pentium, when the FPGA nails down the specificity of application far less than a processor would?

Article: 96445
<fpga_toys@yahoo.com> wrote in message news:1138992533.556317.212590@g49g2000cwa.googlegroups.com...
>
> As a consultant tackling failed projects, one reoccuring theme when
> I started probing the design/architecture was asking questions about
> the assumptions and getting the "everybody knows ..." answer. The
> "we have always done it that way, and it works ..." answer. Well, why
> is it now broke?

That's the monkey story. http://www.wowzone.com/5monkeys.htm Monkeys are funny. :-)

Cheers, Syms.

Article: 96446
Hi Jeff

Jeff Shafer wrote:
> Question summary: Can I successfully share a Xilinx dual-port BRAM between
> two PowerPC data-OCM's where it is possible to write and read the same
> location at the same time without corrupting the data? (by different
> processors from different ports of the BRAM) I don't care if the read
> returns the old data or the new data from the write, but I want the read
> results to be deterministic and repeatable without corrupting the write.

As I understand, you use 1 port always for read and another always for write. So you could clock the write port on the negative edge even while keeping the two processors and their plb bus on the same clock.

Let's say clk is your main clock. Instead of feeding di, dip and wren directly to the BRAM, register them in the clk domain, then connect the output of these registers to the respective port of the BRAM and feed "not clk" to the wrclk pin of the BRAM.

Sylvain

Article: 96447
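Sylvain's scheme can be sketched in VHDL. Everything here is a hedged illustration: the signal names (di, wren, addr, and the registered copies) are placeholders, the sizes are arbitrary, and the BRAM is shown as an inferred array rather than the actual RAMB16 primitive:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Inside the architecture (declarations shown for context):
-- type ram_t is array (0 to 1023) of std_logic_vector(31 downto 0);
-- signal ram    : ram_t;
-- signal di_r   : std_logic_vector(31 downto 0);
-- signal addr_r : std_logic_vector(9 downto 0);
-- signal wren_r : std_logic;

-- 1) Register the write-side inputs on the main rising edge.
reg_inputs : process (clk)
begin
  if rising_edge(clk) then
    di_r   <= di;    -- write data
    addr_r <= addr;  -- write address
    wren_r <= wren;  -- write enable
  end if;
end process;

-- 2) Perform the write on the falling edge of the same clock, so it
-- lands half a cycle after the read port samples on the rising edge.
wr_port : process (clk)
begin
  if falling_edge(clk) then
    if wren_r = '1' then
      ram(to_integer(unsigned(addr_r))) <= di_r;
    end if;
  end if;
end process;
```

With this arrangement, a simultaneous read and write of the same location is deterministic: the read port always returns the pre-write contents. (With a real RAMB16, the same effect comes from feeding "not clk" to the write port's clock pin, as Sylvain describes.)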
On 3 Feb 2006 06:33:45 -0800, "dp" <dp@tgi-sci.com> wrote:
>Brian Drummond wrote:
>> ...
>> >the only way a simulator can see DC current
>> >resulting from a static magnetic field is a software bug
>> >or, worse, misconcepted basics behind the software.
>>
>> I don't think he's talking about the magnetic field generating a DC
>> current; but modifying the path of one that exists from other causes
>> (the PSU).
>>
>> Think about those moving electrons (beta particles) in a particle
>> detector; a static magnetic field certainly modifies their path.
>> (Thus you can determine their velocity from its radius)
>
>I really regret I have to go back to this thread. I suggest everyone
>posting more on this nonsense makes sure to consult at least
>some high-school physics books first.

I've got a Physics degree, and I expect I'm not alone here. Although, I have to admit, that was over 20 years ago...

> The electrons (or beta particles, the origin does not matter)

They're the same thing.

>can be moved in vacuum or in a gas because they, when moving,
>produce a magnetic field, which interacts with the static
>magnetic field, exactly in the same way as two magnet pieces
>interact with each other (i.e. results in a force applied to the
>freely moving electron).

What is a 'static' magnetic field? This is, I think, your confusion; see my other post. Two things: (1) Maxwell's equations are concerned with the rate of change of a magnetic field, not whether it is 'static' or 'moving', which is meaningless, and (2) magnetic and electric fields are exactly the same thing; it just depends on your frame of reference. One observer sees only an electric field; another observer moving relative to the first observer sees a magnetic field.

> When the electrons move inside a conductor (metal), this effect
>is seen as a mechanical force applied to the _conductor_.

What's the difference? Certainly, a conductor is different from free space; this is what the skin effect is about.
But, in this case, nothing happens when moving from free space into a conductor: there are moving charged particles either in the conductor or in free space; a force is applied to them. They respond to the force, and not by leaking out of the conductor.

Article: 96448
Hi, Nju

Thank you for the reply. I don't know whether your code in the last thread derived from the user_logic.v generated by the Create and Import Peripheral Wizard, and I am currently reading this code.

In this code, there is a port named IP2IP_Addr, and in your last thread, you first connected it with IP2Bus_Addr and then you revised it to your local target memory. What I am not clear about is how to instantiate the "local target memory" which is compatible with the IPIF. Should I instantiate a PLB_BRAM block?

Besides the above, in the user_logic.v, there is a set of 16 flattened byte registers (mst_reg(0:15) in the sample code) which are used for control from the software side. Meanwhile, in the sample code, each of these is assigned a specific address, e.g. the IP2IP register is located at C_BASEADDR+0x104. And I saw another reference design in which software could read these registers directly by address. So I just wonder whether I could allocate some other registers in my hardware and specify the addresses myself. If it could be done, then for writing, I could just store my data in these registers first and then assign these registers' addresses to the IP2IP_Addr. But I don't find anything on how to specify the registers' addresses in the sample code.

So now, could you tell me how you used a "local target memory"? And do you know whether the second method would work or not?

Thank you
Roger

Nju Njoroge wrote:
> Hello,
>
> Please refer to this thread:
> http://groups.google.com/group/comp.arch.fpga/browse_frm/thread/4373c26ee4c38328/aebfcf5e6f06f52c?lnk=st&q=PLB+Master&rnum=1&hl=en#aebfcf5e6f06f52c
>
> (Google "PLB Master" in Google Groups and this will be your first hit).
>
> Properly using the IP2IP_Addr in the IPIF is what allowed my master to
> work properly.
>
> Good luck,
>
> NN
>
> agou wrote:
> > Hi, group
> >
> > I generated a IPIC interface by the Create and Import Peripheral Wizard
> > to access the PLB_DDR block on the PLB bus.
> >
> > I chose the DMA, user logic Master Support mode. And then try to
> > develop my own logic based on the generated files. Here, I have one
> > problem:
> >
> > To write to an address on the PLB bus, I need to provide two addresses:
> > IP2IP_Addr which stores the source data and IP2Bus_Addr
> > to which the data is written. Do I need to instantiate a BRAM in the FPGA
> > to provide the source address?
> >
> > What I am not clear about is whether the BRAM is compatible with the IPIC logic.
> > Or do I have to instantiate another PLB_Bram and then hook it up to the
> > PLB? Is there any other simple method?
> >
> > Thank you for the help.
> > Roger

Article: 96449
Well, acronyms are sometimes silly. RAM says random access (which every ROM has), and should really be Read/Write Memory. ASIC seems to stand for a design that is specifically designed for/by one customer, while an ASSP is a standard part for a popular function. Don't read too much into the TLA (three-letter acronym).

Peter Alfke