I have the config you mentioned as 1), plus the TMS, TCK, TDI, TDO at the controller, usually tristated. The project should have a default configuration, hence the EPC2 or another flash. It should be reconfigurable by the user without being bothered with MaxPlus2 or Quartus. New configurations should then be reprogrammed into either of them (ACEX/EPC2) by some simple software, ideally as part of the application software accessing this board. The project happens to have an Atmel and a USB plus an RS485 on the board too. The JAM player is bigger than the available code space, which must also hold some application software. And the new configuration comes from a PC. It appears to me that since the ByteBlaster just passes a serialised stream with little intelligence (a few hundred ASM statements), it should be possible to send the stream in parallel to the controller, which serialises it. That parallel stream could come from USB, for example. Sounds unreal? Rene ikauranen wrote: > Hello Rene, > I can't understand the role of microcontroller in this particular > configuration. > > 1) If you use EPC2, you do not need microcontroller. EPC2 configures > ACEX at power-up (in passive serial mode). From Max+2 Programmer you > can either program EPC2, or configure ACEX. The relevant schematic - > AN116, ver. 1.03, fig. 29, page 55. In Max+2, you shall setup > multi-device JTAG chain (selecting POF for EPC2 and SOF for ACEX). > > 2) If you use flash memory and microcontroller, you should employ RBF > file and use passive serial configuration scheme (it is easier than > JTAG). > > >>Since the *.pof and the *.sof appear to require interpretation >>by MaxPlus2 or Quartus2, the preferred format would be *.rbf >>it appears. But is the created rbf sufficient for the ACEX as >>well as for the configuration flash ? > > > Actually, RBF content appears in POF at some starting address. The > length of RBF depends on the type of programmable logic device.
POF is > a fixed-length file. Its length depends on the type of programming > device (128K+ for EPC1, 200K+ for EPC2). The rest of POF is stuffed up > with 0xFFs.
Article: 50976
john jakson wrote: > Since I brought up the Bio computing post, I have to wonder why there > isn't more interest in using FPGA accelerators to speed up some of the > EDA tools in a general way, ie given a std FPGA board, do a MMX SSE > like check for turbo HW. I know at DAC, you can find any no of > companies pushing HW accelerators but each seems very expensive and > thoroughly proprietory to a vendor and the particular tool they are > speeding up. I have seen Verilog simulation & ASIC emulation engines, > but I can't recall HW used to speed up anything else. If you use one > of those kits, the HW won't help any other task and it also comes in > another big box. > > My XMAS wish would be to see standardized FPGA HW available to the > power developer in a way that could be used for many tasks. It could > mean using say a HandelC flow for some things of less importance, but > for critical bottlenecks, a full blown C to HDL convert. I wouldn't > mind working on such a problem myself, last time I did an ASIC Place n > Route it took a week of cpu, you would think that engineers would want > such a solution. Accelerator fans with a love for C should definitely stay with a DSP. There really are excellent machines running at 400MHz and executing 6 instructions in parallel. Rene -- Ing.Buero R.Tschaggelar - http://www.ibrtses.com & commercial newsgroups - http://www.talkto.net
Article: 50977
John, Starting on page 178 http://www.support.xilinx.com/publications/products/v2pro/handbook/ug012_ug.pdf You will find the topologies of the four interfaces. Classes I and III are intended for unidirectional high speed interfaces, whereas classes II and IV are intended for bidirectional high speed interfaces (mirror symmetric, so they would support a tristate IO for both TX and RX). Classes I and II are weaker, and intended for shorter runs; III and IV are stronger and are intended for longer (more heavily loaded) runs. Both 1.5V and 1.8V versions of all four classes of HSTL exist, and the 1.8V versions are just a bit faster than the 1.5V versions in most multi-purpose IOs (ie programmable IOs). 1.8 volts came about when many ASIC implementations just didn't work at the intended frequency, so the voltage was increased to make it work. All use a separate Vref supply of 1/2 Vcco at the receiver, which is a high speed comparator. All are parallel terminated standards that are suitable for multi-drop runs (in the unidirectional case), and have excellent signal integrity characteristics. The lower voltage swing leads to less cross talk, and less EMI, but not less ground bounce, as the currents are about as large as in other strong IO standards. The disadvantage is that external resistors are required (unless you use an internal termination feature, such as the Virtex II and II Pro DCI), and that such parallel termination, internal or external, burns power. Many people use HSTL without the resistors on very short runs, but to do so violates the standard, and one must simulate and test to be sure that you will be safe, and the interface will work as intended. Austin John McMiller wrote: > Hi, > What is the main differences between the variuos HSTL I/O technologies: > > HSTL-I > HSTL-II > HSTL-III > HSTL-IV > > ? > > John
Article: 50978
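For reference, selecting one of these classes in the Xilinx tools is just a per-pin attribute. A minimal VHDL sketch follows; the entity and signal names are made up, and the IOSTANDARD attribute spelling assumes the Virtex-II conventions, so check it against your software version:

```vhdl
-- Hypothetical receive side of a unidirectional HSTL-I link.
-- The IOSTANDARD attribute asks the tools to configure the input
-- buffers as HSTL class I receivers (Vref = Vcco/2 must be supplied).
library ieee;
use ieee.std_logic_1164.all;

entity hstl_rx is
  port (
    clk  : in  std_logic;
    din  : in  std_logic_vector(7 downto 0);
    dout : out std_logic_vector(7 downto 0)
  );
end entity hstl_rx;

architecture rtl of hstl_rx is
  attribute IOSTANDARD : string;
  attribute IOSTANDARD of din : signal is "HSTL_I";
begin
  process (clk)
  begin
    if rising_edge(clk) then
      dout <= din;  -- simple registered pass-through
    end if;
  end process;
end architecture rtl;
```

The same attribute can equally go in the UCF instead of the source; either way, the termination (or DCI) and the Vref pins still have to be provided on the board as described above.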
"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message news:<J7UN9.487484$NH2.34080@sccrnsc01>... > "rickman" <spamgoeshere4@yahoo.com> wrote in message > news:3E07E84E.4E15AAA1@yahoo.com... > (snip) > > > > I always wondered why the Transputer was the way it was. It actually > > put off a lot of people because there were so many things that seemed > > strange about it. Not so much the unique architectural features, but a > > lot of the supporting software and debug things. I don't remember too > > many details, but for example, there was an instruction or assembler > > directive called "guy". Several people remarked on that. So even in a > > company that was embarking on a major project with an array of these > > processors, it was hard to get people to like the idea. > Answer to that is real simple. The guy who did a lot of the logic design & microcoding was called Guy. This was just a debug or private op he put in for some testing, which later escaped to user land.
Article: 50979
I did a study for Xilinx once on accelerating their ppr code. I implemented part of the placer (30% of the performance) and got it to go 9.8x faster. It took about 1500 gates. From my analysis I thought you could put most everything in about 50K-100K gates. They were just coming out with the xc4028 at the time. I once pitched putting all of Mentor's EDA tools in hardware, but they have a big NIH problem (actually even when they invent something, a la Butts et al., they have a problem;-). I'm sure I could design a system that could sell for about $5K that would speed up Par by 10x-20x, but I don't think Xilinx would go for it. It would be about a $2 Million project, most of it manpower. The offer is on the table!!! Steve www.vcc.com PS/NB DSP chips are no match for FPGAs when it comes to performance. 6 instructions per clock is nothing. Most big FPGAs can do the equivalent of 100's of instructions per clock. "Rene Tschaggelar" <tschaggelar@dplanet.ch> wrote in message news:3E086D74.5080305@dplanet.ch... > john jakson wrote: > > Since I brought up the Bio computing post, I have to wonder why there > > isn't more interest in using FPGA accelerators to speed up some of the > > EDA tools in a general way, ie given a std FPGA board, do a MMX SSE > > like check for turbo HW. I know at DAC, you can find any no of > > companies pushing HW accelerators but each seems very expensive and > > thoroughly proprietory to a vendor and the particular tool they are > > speeding up. I have seen Verilog simulation & ASIC emulation engines, > > but I can't recall HW used to speed up anything else. If you use one > > of those kits, the HW won't help any other task and it also comes in > > another big box. > > > > My XMAS wish would be to see standardized FPGA HW available to the > > power developer in a way that could be used for many tasks. It could > > mean using say a HandelC flow for some things of less importance, but > > for critical bottlenecks, a full blown C to HDL convert.
I wouldn't > > mind working on such a problem myself, last time I did an ASIC Place n > > Route it took a week of cpu, you would think that engineers would want > > such a solution. > > Accelerator fans with a love to C should definitely stay at a DSP. > There really are excellent machines running at 400MHz and executing > 6 instuctions in parallel. > > Rene > -- > Ing.Buero R.Tschaggelar - http://www.ibrtses.com > & commercial newsgroups - http://www.talkto.net >Article: 50980
Steve, Our software tools have documented improvements of at least 2X in every release for more than four years now (disregarding any speedup in computers or operating systems, which only improves overall performance). If we did not do this, our 2V8000 would be unusable! 28K gates vs 8 million ..... 285:1 density increase. I agree that a hardware platform to implement the ppr is an exciting concept, and it may be that with our Virtex II Pro we could actually run the latest software under LINUX in a machine made from Virtex II Pros, with the place and route (and perhaps other stuff) accelerated by dedicated gates (similar to our existing eXtreme DSP apps). I disagree on the number of gates to do this, however, as with the next generation boasting >2 billion transistors as not just gates and routing, but DCMs, PPCs, MGTs, etc., the problem of growing complexity refuses to even slow down. As for Xilinx NIH, we consider the ppr confidential and proprietary (as it is closely tied to the hardware and architecture), and have determined that it is critical to our success, and that we must own it and control it completely, no less than the IC design itself. The synthesis tools, on the other hand, are not something that we consider critical to our success in the sense that we have to 'own' them. We do have our own synthesis now, primarily as a test platform (XST) for new and exciting techniques that we also share with our partners. Xilinx, through its partners, has probably the most rational 'NIH system' I have ever encountered: if it is critical, then we state it, and we own it, and NIH is turned into "this is something we must be experts in, and watch how everyone else does it, and always do it better." If we do not consider it critical, then we prefer to partner with someone to whom it is critical, and let them do it for us (why re-invent the wheel when someone else is doing it better). Austin Steve Casselman wrote: > I did a study for Xilinx once on accelerating their ppr code.
I implemented > part of the placer (30% of the performance) and got it to go 9.8x faster. It > took about 1500 gates. From my analysis I thought you could put most > everything in about 50K-100K gates. They were just coming out with the > xc4028 at the time. I just pitched putting all of Mentors EDA tools in > hardware but they have a big NIH problem (actually even when they invent > something ala Butts et. al. they have a problem;-). I'm sure I could design > a system that could sell for about $5K that would speed up Par by 10x-20x > but I don't think Xilinx would go for it. It would be about a $2 Million > project most of it man power. > > The offer is on the table!!! > > Steve > www.vcc.com > PS/NB > DSP chips are no match for FPGAs when it comes to performance. 6 > instructions clock is nothing. Most big FPGAs can do the equivalent of > 100's of instructions per clock. > > "Rene Tschaggelar" <tschaggelar@dplanet.ch> wrote in message > news:3E086D74.5080305@dplanet.ch... > > john jakson wrote: > > > Since I brought up the Bio computing post, I have to wonder why there > > > isn't more interest in using FPGA accelerators to speed up some of the > > > EDA tools in a general way, ie given a std FPGA board, do a MMX SSE > > > like check for turbo HW. I know at DAC, you can find any no of > > > companies pushing HW accelerators but each seems very expensive and > > > thoroughly proprietory to a vendor and the particular tool they are > > > speeding up. I have seen Verilog simulation & ASIC emulation engines, > > > but I can't recall HW used to speed up anything else. If you use one > > > of those kits, the HW won't help any other task and it also comes in > > > another big box. > > > > > > My XMAS wish would be to see standardized FPGA HW available to the > > > power developer in a way that could be used for many tasks. It could > > > mean using say a HandelC flow for some things of less importance, but > > > for critical bottlenecks, a full blown C to HDL convert. 
I wouldn't > > > mind working on such a problem myself, last time I did an ASIC Place n > > > Route it took a week of cpu, you would think that engineers would want > > > such a solution. > > > > Accelerator fans with a love to C should definitely stay at a DSP. > > There really are excellent machines running at 400MHz and executing > > 6 instuctions in parallel. > > > > Rene > > -- > > Ing.Buero R.Tschaggelar - http://www.ibrtses.com > > & commercial newsgroups - http://www.talkto.net > >Article: 50981
Okay, I got your point, but we are talking about an entire chip here... consisting of several well-defined blocks with properly known signalling/inter-block communication. We need to synchronise the entire chip not on every delta or (probably) every clock, since this communication will occur only when absolutely required and with some additional validating signals. Synchronisation of signals can be done only on such "events", which can be predetermined for proper operation. This approach seems valid to me for dataflow/controlflow kinds of architectures. Also, to what extent can the network bring down the simulation speed? Are there any figures or studies available? Also, given the amount of time the OS spends on page switching across the vast amounts of memory the whole chip would otherwise require, it *may* be profitable to account for network tx times in this period. Please comment. regards, Nachiket Kapre Design Engineer Paxonet Communications Inc. johnjakson@yahoo.com (john jakson) wrote in message news:<adb3971c.0212231845.525b018c@posting.google.com>... > nachikap@yahoo.com (Nachiket Kapre) wrote in message news:<eadce17c.0212231014.41e76e89@posting.google.com>... > > thanks a ton for the reply. those docs really helped a lot. Platform's > > way is to spool off several tasks in parallel on several machines but > > it is really pseudo parallel as each indiviual pc runs a unique test. > > At the end of the day, you do get results quickly since multiple > > simuations are running simultaneously. What i intended was to run a > > single test on multiple PCs with the simulation poartitioned. Data > > communication across simulators will then occur through sockets. > > > > regards, > > Nachiket Kapre. > > Design Engineer. > > Paxonet Communications. > > > > "Steve Casselman" <sc@vcc.com> wrote in message news:<y_zN9.1278$sl.104130625@newssvr21.news.prodigy.com>... > > > Sure why not? It is always possible to attempt anything.
There are programs > > > that will do distributed simulations... > > > http://www.platform.com/PDFs/whitepapers/MUG_Oct2K1.pdf (this looks like the > > > thing most people do) > > > http://www.avery-design.com/web/simcluster.pdf > > > > > > DO a google search on > > > > > > distributed simulations modelsim > > > > > > > > > Steve > > > > > > > > > "Nachiket Kapre" <nachikap@yahoo.com> wrote in message > > > news:eadce17c.0212220755.431fc33b@posting.google.com... > > > > While simulating a complete ASIC (~5 million gates) consisting of > > > > several individual blocks, is it possible to attempt a concurrent > > > > simulation (functional or timing) in a distributed environment with a > > > > pool of dedicated PCs simulating the individual blocks with > > > > inter-block communication handled by PLI/FLI wrappers in Modelsim > > > > which take care of "forcing" the signals driven by other blocks into > > > > this block? Each individual PC needs to load only a small part of the > > > > whole design and wait for new updates from interacting blocks. Pakcets > > > > keep travelling to and fro between the PCs progressing the simulation. > > > > It may also be possible to avoid IDLE time by allowing the individual > > > > PCs to assume a certain set of inpout values and start simulating, if > > > > later an update arrives that invalidates this assumption, all > > > > subsequent operatins are rerun with these new inoouts and the > > > > corresponding outputs generated invalidated. This will definitely > > > > require mor thinking than can fit in a single email, but how is the > > > > idea for starters?...and has it been tried before ? > > > > It would'nt be wrong to mention that attempting such a simulation on a > > > > single PC would be too tedious and time consuming. > > > > > > > > regards, > > > > Nachiket Kapre. > > > > Design Engineer. > > > > Paxonet Communication Inc. 
> > I don't believe distributed ASIC/FPGA simulation is best done by > chopping up blocks & distributing across a pool of cpus. The bandwidth > needed to send signals between the blocks would slow down the overall > result considerably. On the other hand if you want to run many smaller > sims of lots of different test cases, then that usually just means > separate license 1 per simulator and some workload SW like Compaq? > has. > > If a very fast cycle C model is available that can run atleast say > 10-1000x faster than HDL simulation, then you could run a C sim for > the no of cycles desired and collect full state say every 1M cycles. > Then a single HDL sim can be split into time chunks of 1M cycles each > per cpu, now there is no communication between cpus save for > collecting the desired detail results. > > Of course if you had a HDL->C compiler, then your C model comes for > free & I assume it would run 10.. faster too but less detailed. > > I personally wouldn't do any project without a V2C compiler. C sims > for most of the work, Verilog for detailed checks. Of course this is > really meant for simpler clock schemes that have predicable time > flows.Article: 50982
Yes, DSPs are great in terms of the number of instructions that can be performed in parallel, and this is useful in hardware acceleration. However, this is often only useful if the pipeline is full, and that is a distinct challenge. The advantage that I see of a dedicated HW accelerator is that, particularly for digital filters that may represent an analog system, any desired bit width can be designed, thus achieving the resolution / accuracy the designer wants. The main bottleneck that I see is the bus between the PC and the HW accelerator card. BTW, if you look at the TI C6x DSP family, e.g. the 6211, you'll see in the data sheet that MIPS doesn't really translate to MIPS. There are conditions for these MIPS to be met. Kindest Regards, James Rene Tschaggelar <tschaggelar@dplanet.ch> wrote in message news:<3E086D74.5080305@dplanet.ch>... > john jakson wrote: > > Since I brought up the Bio computing post, I have to wonder why there > > isn't more interest in using FPGA accelerators to speed up some of the > > EDA tools in a general way, ie given a std FPGA board, do a MMX SSE > > like check for turbo HW. I know at DAC, you can find any no of > > companies pushing HW accelerators but each seems very expensive and > > thoroughly proprietory to a vendor and the particular tool they are > > speeding up. I have seen Verilog simulation & ASIC emulation engines, > > but I can't recall HW used to speed up anything else. If you use one > > of those kits, the HW won't help any other task and it also comes in > > another big box. > > > > My XMAS wish would be to see standardized FPGA HW available to the > > power developer in a way that could be used for many tasks. It could > > mean using say a HandelC flow for some things of less importance, but > > for critical bottlenecks, a full blown C to HDL convert.
I wouldn't > > mind working on such a problem myself, last time I did an ASIC Place n > > Route it took a week of cpu, you would think that engineers would want > > such a solution. > > Accelerator fans with a love to C should definitely stay at a DSP. > There really are excellent machines running at 400MHz and executing > 6 instuctions in parallel. > > ReneArticle: 50983
Sorry, I did not mean that to sound like Xilinx has a problem. I have the greatest respect for Xilinx; they are a great company and I believe that some day they will dominate the processor world and beat the snot out of Intel... if they want to... Happy Holidays! Steve > As for Xilinx NIH, we consider the ppr confidential and proprietary (as it is > closely tied to the hardware and architecture), and have determined that it is > critical to our success, and that we must own it, and control it completely no > less than the IC design itself.
Article: 50984
This encoding is being used for a LAN sized timing synchronization chain. The system was implemented in the early 1980s, and we are now adding to it. We are calling it Bi-Phase encoding, but I don't know if that is correct. To decode it you can do the following, where A is normally high and B is normally the clock (1 MHz):
clk_sig <= A xnor B;
serial_sig <= '1' when A = '0' else
              '0' when B = '0' else
              serial_sig;
Basically, for each clock of B there is a serial '0', and for each clock of A there is a serial '1'. I am now trying to make a device that records the time, based on an external clock, between every pair of incoming serial bytes. The recorded time and byte are stored in memory until a user asks for a memory dump via a standard UART. The recording is turned on and off by two codes, so running out of memory is not a problem. The way I am doing this now does not seem to work, but I don't understand why or how to fix it. Basically I am asynchronously latching each incoming byte, and a generated strobe line. Then a state machine clocked with the external time clock waits for the strobe latch to be set. When this happens I latch the time (based on a counter running off the external time clock) and a data_ready latch, and wait for a clear signal. In another state machine running at 40 MHz I wait for this data_ready latch, then write the data to RAM and set the clear signal. Then everything starts waiting for the next byte. In simulation everything seems to work, but in hardware it only works sometimes. I am pretty new to FPGAs, so any ideas would help. Thanks, Tim Muzaffer Kal <kal@dspia.com> wrote in message news:<s0uf0v0g745kd88qgf98ct8cqv1an63553@4ax.com>... > On Tue, 24 Dec 2002 18:47:16 +1300, "Ralph Mason" > <masonralph_at_yahoo_dot_com@thisisnotarealaddress.com> wrote: > > >Not a answer, more a question. > > > >What are the advantages of this serial encoding scheme? Is there a specific > >name for this kind encoding?
> > > >Perhaps it's something I can put in my toolbox. > > > >Thanks > >Ralph > > > >"Tim" <timdet@san.rr.com> wrote in message > >news:3629ef86.0212231717.152eede1@posting.google.com... > >> Hello, > >> I am working on a project based on a Xilinx Spartan 2. The project > >> inputs serial data encoded on two lines (A and B). The encoding is > >> such that taking the xnor of both lines gives the serial clock. > >> clk_sig <= A xnor B; > >> > >> The project seems to simulate correctly, but when programmed inchip > >> does not work. > >> I receive a warning "Gated clock. Clock net clk_sig is source > >> by a combinatorial pin. This is not good design practice. Use the > >> CE pin to > >> control the loading of data into the flip-flop" > >> which I think could be why it doesn't work when programmed. > >> > >> Is there a better way to do this xnor? > >> I am using this clk signal as the clock for a state machine that is > >> basically a serial to parallel converter. The clock is 1Mhz, but must > >> be derived from the serial data somehow. > >> The rest of the project runs on a 40Mhz clock that waits for the > >> latched serial data. > >> Thanks, > >> Tim > > > > The encoding scheme sounds like 1394 where it's called data strobe > encoding. It's supposed to have better jitter characteristics. Both > strobe and data lines are used to recover clock which might be the > reason why jitter is lower as oppposed to data and clock begin > received on separate wires where there might be a systematic bias on > one or the other. > > > Muzaffer Kal > > http://www.dspia.com > ASIC/FPGA design/verification consulting specializing in DSP algorithm implementationsArticle: 50985
Hi everyone, I have been using the Xilinx Design Manager for years and I liked it, as I could organize multiple versions and revisions. Unfortunately, Xilinx has discontinued the Design Manager starting from ISE 5.1i, and 'ise' doesn't give me the same flexibility the Design Manager provided. Now I'm thinking of running Xilinx projects using a Makefile. 'ise' provides a command line log file ('<project>.cmd_log') but it is not a complete Makefile. I thought many people had already created such a Makefile for Xilinx ISE 5.1i. (I just don't want to reinvent the wheel.) Could anybody share such a Makefile with everyone? Any feedback will be highly appreciated. Best regards, Aki Niimura
Article: 50986
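A minimal hand-written Makefile for the ISE command-line flow might look like the sketch below. The tool names (xst, ngdbuild, map, par, bitgen) are the standard ISE executables, but the flags and the design/part names (top, xc2s100-5-tq144) are placeholders to adapt to your own project and ISE release; recipe lines must be indented with tabs.

```makefile
# Minimal ISE command-line flow: synthesize, translate, map,
# place & route, and generate a bitstream.
# DESIGN and PART are assumptions -- edit to match your project.
DESIGN = top
PART   = xc2s100-5-tq144

all: $(DESIGN).bit

# Synthesis (XST reads its options from a script file)
$(DESIGN).ngc: $(DESIGN).vhd
	xst -ifn $(DESIGN).xst

# Translate: merge netlists and constraints into an NGD database
$(DESIGN).ngd: $(DESIGN).ngc $(DESIGN).ucf
	ngdbuild -p $(PART) -uc $(DESIGN).ucf $(DESIGN).ngc $(DESIGN).ngd

# Map the logical design onto device resources (also writes the PCF)
$(DESIGN)_map.ncd: $(DESIGN).ngd
	map -p $(PART) -o $(DESIGN)_map.ncd $(DESIGN).ngd $(DESIGN).pcf

# Place and route
$(DESIGN).ncd: $(DESIGN)_map.ncd
	par -w $(DESIGN)_map.ncd $(DESIGN).ncd $(DESIGN).pcf

# Bitstream generation
$(DESIGN).bit: $(DESIGN).ncd
	bitgen -w $(DESIGN).ncd

clean:
	rm -f *.ngc *.ngd *.ncd *.pcf *.bit *.bgn *.log
```

With make's dependency tracking, editing the UCF alone reruns only ngdbuild onward, which is one of the things Design Manager used to manage for you.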
I recently picked up an Altera FLEX EPF10K100 PCI card for about $20 at an auction, so I wanted to get into FPGA development. Is there any freeware or cheap software to program the FPGA, and what are some good books to get started with FPGAs for someone with a programming background?
Article: 50987
Hi, I have just bought some EPM7064LC84-7 devices and made a JTAG lead as well. Which software is better? The choice is Quartus II Web Edition or MAX+Plus II Baseline, as they're both free and I'm a student (not at university). I am totally new to CPLDs but have worked on GALs in the past (and found their limitations). From what I understand, the MAX+Plus software supports only the MAX devices and Quartus covers every device, so I don't see a point to MAX+Plus. I must have that wrong. I downloaded Quartus a few weeks ago, played with the tutorial, and it looked good. However, the website said the 5-volt MAX7000S series will be supported in the second quarter of 2003. Do I have the S series, or is it just plain 7000? I want to be able to do VHDL mainly; schematic entry would be nice to have as an option. And the tools to program the actual chip too, along with simulation. Thanks for any help, Lyndon
Article: 50988
>I am working on a project based on a Xilinx Spartan 2. The project >inputs serial data encoded on two lines (A and B). The encoding is >such that taking the xnor of both lines gives the serial clock. >clk_sig <= A xnor B; > >The project seems to simulate correctly, but when programmed inchip >does not work. >I am using this clk signal as the clock for a state machine that is >basically a serial to parallel converter. The clock is 1Mhz, but must >be derived from the serial data somehow. >The rest of the project runs on a 40Mhz clock that waits for the >latched serial data. I see two potential problems. First, the tools and silicon really expect you to use one of the global clock buffers. If you get a "gated clock" warning, you are probably doing something else. Unless you know what you are doing and are very careful with your non-global clock, you are asking for clock skew problems. Second, you have the standard metastability problem where your slow-clock state machine hands data off to your main clock. (This is probably not a big deal at a 40:1 clock ratio. Just one more thing to keep in mind.) What I would (probably) do: Run each of your data signals through a pair of synchronizer FFs, and then feed them to a state machine. Clock that state machine off your main 40 MHz clock. You can add deglitching logic and/or no-clock logic if you want. The state machine emits a data bit and a data-ready signal. (Or call the data-ready clock-enable and use it to control your current state machine.) -- The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.Article: 50989
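To make that suggestion concrete, here is a minimal VHDL sketch of the approach. All names are made up; it assumes the A/B encoding described earlier in the thread (both lines idle high, a low pulse on A is a serial '1', a low pulse on B is a serial '0') and a free-running 40 MHz clock, with no deglitching added:

```vhdl
-- Clock-recovery-free decoder: sample A/B with the 40 MHz system
-- clock instead of clocking logic from the gated "A xnor B" signal.
-- Two synchronizer flops per input tame metastability; edges are
-- detected on the synchronized copies only.
library ieee;
use ieee.std_logic_1164.all;

entity ab_decoder is
  port (
    clk40    : in  std_logic;  -- free-running 40 MHz clock
    a, b     : in  std_logic;  -- encoded serial pair, idle high
    data_bit : out std_logic;  -- recovered serial bit
    data_stb : out std_logic   -- one-clk40 strobe per recovered bit
  );
end entity ab_decoder;

architecture rtl of ab_decoder is
  -- bit 0 is the newest (raw) sample, bits 1 and 2 are synchronized
  signal a_sync, b_sync : std_logic_vector(2 downto 0) := (others => '1');
begin
  process (clk40)
  begin
    if rising_edge(clk40) then
      a_sync <= a_sync(1 downto 0) & a;
      b_sync <= b_sync(1 downto 0) & b;

      data_stb <= '0';
      -- falling edge on synchronized A => serial '1'
      if a_sync(2) = '1' and a_sync(1) = '0' then
        data_bit <= '1';
        data_stb <= '1';
      -- falling edge on synchronized B => serial '0'
      elsif b_sync(2) = '1' and b_sync(1) = '0' then
        data_bit <= '0';
        data_stb <= '1';
      end if;
    end if;
  end process;
end architecture rtl;
```

The data_bit/data_stb pair can then feed the deserializer and the timestamp counter in the single 40 MHz domain, removing both the gated clock and the cross-domain handoff that only worked sometimes. At a 1 MHz bit rate there are roughly 40 samples per bit, so plenty of margin for the synchronizers.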
In article <3E08A1A1.40193F80@xilinx.com>, Austin Lesea <austin.lesea@xilinx.com> wrote: >I agree that a hardware platform to implement the ppr is an exciting >concept, and it may be that with our Virtex II Pro we could actually >run the latest software under LINUX in a machine made from Virtex II >pros, with the place and route (and perhaps other stuff) accerlerated >by dedicated gates (similar to our exisiting eXtreme DSP apps). >I disagree on the number of gates to do this, however, as with the next >generation boasting >2 billion transistors as not just gates and routing, but >DCMs, PPCs, MGTs, etc., the problem of growing complexity refuses to even slow >down. I also think that there is more to be had in making the placement smarter in the first place, but everyone's heard my datapath rant far far far too many times. :) >As for Xilinx NIH, we consider the ppr confidential and proprietary (as it is >closely tied to the hardware and architecture), and have determined that it is >critical to our success, and that we must own it, and control it completely no >less than the IC design itself. Now here's a question: How big of a sledgehammer does it take to get something new integrated into the PPR backend flow? Retiming is really powerful and belongs in the back end (between placement & routing), as that is where one has the most information etc. >> DSP chips are no match for FPGAs when it comes to performance. 6 >> instructions clock is nothing. Most big FPGAs can do the equivalent of >> 100's of instructions per clock. Andre DeHon has this really nice 3D graph of conventional ISA & DSP vs FPGA, where you have two axes: bitwidth and path length (sequential operations). FPGAs are phenomenal for narrow to even fairly wide bitwidth but very short path length. DSPs/CPUs require wider bitwidths and long path lengths for good efficiency.
But throw a processor on the die (not a very GOOD processor, mind you, but just a decent one) and now you have tolerable efficiency on the long-pathlength parts of the operation, which is even more true today thanks to the nearly O(n^2) area requirement for scaling DSP and CPU performance versus the roughly O(n) area requirement for scaling FPGA performance. -- Nicholas C. Weaver nweaver@cs.berkeley.edu
Article: 50990
Brian Drummond wrote: > On Mon, 23 Dec 2002 17:32:37 -0500, Theron Hicks <hicksthe@egr.msu.edu> > wrote: > > > > > > >rickman wrote: > > >Rick, > > The situation is that the EMI is being generated in the cooling fans for the > >unit itself. Thus the unit is effectively interfering with itself. I have tried > >ferrite beads and capacitors to reduce the posibility of conducted EMI. The > >chassis of the fan is aluminum so it prety much shields itself. However, I tried > >to shield the fan with a screen on top of the fan between the boards and the fan. > > It sounds like the hotwire itself is a receiver antenna... Could be... > > > though how do you distinguish between the acoustic interference from the > fan, and the electrical interference? > In this case, I am pretty sure the problem is the EMI, as when I mechanically removed the fan from the system by damping its vibrations out with my hand, the noise level did not change. > > Older styles of fan with all-metal construction may provide better > shielding for the motor. Or maybe it's time to look at a belt-driven > fan. > > Or maybe this will help! > http://users.moscow.com/oiseming/lc_ant_p/pic_Prj1.htm > :-) > > - Brian Hmm... run the cooling fan off the excess heat generated by the electrical load. (For those who aren't familiar with this link, it discusses a Stirling engine powered fan.) I suspect it is not practical but it would make a slick solution. It certainly was an interesting web site. Are you a member of that particular antique engine club? Some day I would love to restore an old hit and miss engine or something similar. While I am not old enough to remember the old engines, I sure do enjoy watching them at antique engine/tractor shows (especially the steamers and the big old gas tractors (Advance - Rumley, for example)). Thanks, Theron
Article: 50991
It's interesting to note that the higher-class levels of HSTL do not recommend that their VREFs and VTTs be at VCCO/2. Apparently this is due to the non-symmetrical drive characteristics of real-world HSTL drivers (any insight into this, Austin?). I've used HyperLynx to simulate a QDR RAM with Virtex-II. If the VTTs are not set as specified by the HSTL spec, then the simulation shows a distorted duty cycle for the data. Life used to be so much simpler when everything was at 5V. Sigh...

Bob

"Austin Lesea" <austin.lesea@xilinx.com> wrote in message news:3E087E55.48CF1C6F@xilinx.com...
> John,
>
> Starting on page 178
>
> http://www.support.xilinx.com/publications/products/v2pro/handbook/ug012_ug.pdf
>
> You will find the topologies of the four interfaces. The class I and class III are intended for unidirectional high speed interfaces, whereas classes II and IV are intended for bidirectional high speed interfaces (mirror symmetric, so they would support a tristate IO for both TX and RX).
>
> Classes I and II are weaker, and intended for shorter runs; classes III and IV are stronger, and are intended for longer (more heavily loaded) runs.
>
> Both 1.5V and 1.8V versions of all four classes of HSTL exist, and the 1.8V versions are just a bit faster than the 1.5V versions in most multi-purpose IOs (i.e. programmable IOs). 1.8 volts came about when many ASIC implementations just didn't work at the intended frequency, so the voltage was increased to make it work.
>
> All use a separate Vref supply of 1/2 Vcco at the receiver, which is a high speed comparator.
>
> All are parallel terminated standards that are suitable for multi-drop runs (in the unidirectional case), and have excellent signal integrity characteristics.
>
> The lower voltage swing leads to less cross talk and less EMI, but not less ground bounce, as the currents are about as large as with other strong IO standards.
> The disadvantage is that external resistors are required (unless you use an internal termination feature, such as the Virtex II and II Pro DCI), and that such parallel termination, internal or external, burns power.
>
> Many people use HSTL without the resistors on very short runs, but to do so violates the standard, and one must simulate and test to be sure that you will be safe, and the interface will work as intended.
>
> Austin
>
> John McMiller wrote:
>
> > Hi,
> > What are the main differences between the various HSTL I/O technologies:
> >
> > HSTL-I
> > HSTL-II
> > HSTL-III
> > HSTL-IV
> >
> > ?
> >
> > John

Article: 50992
> In this case, I am pretty sure the problem is the EMI, as when I mechanically remove the fan from the system by damping its vibrations out with my hand, the noise level does not change.

Try some dryer/vent hose or big-enough PVC pipe, and mount the fan a long distance from your gear. EMI should fall off as 1/R-squared.

-- The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.

Article: 50993
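A quick sanity check on that inverse-square suggestion (an aside, not from the original thread): for a small radiating source in the far field, it is the radiated power density that falls off with the square of the distance, while the field magnitude falls off linearly:

```latex
% Far-field falloff from a small (point-like) radiator,
% at distance r from the source:
S(r) \propto \frac{1}{r^{2}}, \qquad |E(r)| \propto \frac{1}{r}
% Doubling the fan-to-sensor distance thus cuts the received
% power density by a factor of four (about 6 dB).
```

In the near field of a fan motor (distances comparable to the source size), the falloff can be considerably steeper, so moving the fan even a modest distance away may help more than this estimate suggests.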
Hi,
I am looking for more information about the "Parallel Automatic Synchronization System (PASS)" block in the HDMP-1034 gigabit tx/rx IC. Its datasheet is very short on this, and says that when the PASS system is enabled, data (RX) is clocked on the RISING EDGE of REFCLK. This method works fine but is not standard. In this way we cannot build a standard full duplex data link (?). How can we use this feature?
I want to use the HDMP-1032/34 to build a fiber optic link for a telecom system. How can I interface it with the telecom side? In telecom systems data is sent on the rising edge of the clock and received on the falling edge, but when the falling edge occurs, data is not ready in the HDMP-1032. Please guide me about this.
Best regards.
Masoud Naderi

Article: 50994
Aurash Lazarut <aurash@xilinx.com> wrote in message news:<3E084E50.30D2E989@xilinx.com>...
> Muthu,
>
> DCMs are located on the top and bottom of the BRAM columns (on the IOB ring). If you stay with the mouse on these resources in the graphical representation of the die, you can see the coordinates (the same as in fpga_editor).
> Hope this helps,
> Aurash

Hi Aurash,

Thanks. If we use more than 8 BUFGs, then manual placing of the BUFGs is required. But what should be the approach? Should we LOC the DCM first? Where can I find more details?

Thanks and regards,
Muthu

Article: 50995
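One way to pin the placement down by hand is LOC attributes in the VHDL source, which the Xilinx tools of that era honour. This is only a sketch: the instance names and site coordinates below are hypothetical, the DCM component is trimmed to two ports for brevity (a real instantiation also needs feedback and reset), and valid site names differ per device, so check them in FPGA Editor first.

```vhdl
-- Hypothetical sketch: LOCing a DCM and the BUFG it drives.
-- Site names (DCM_X0Y0, BUFGMUX0P) are examples for Virtex-II;
-- verify the coordinates valid on your own part in fpga_editor.
library ieee;
use ieee.std_logic_1164.all;

entity clocking is
  port (clk_in : in std_logic; clk_out : out std_logic);
end clocking;

architecture rtl of clocking is
  -- Trimmed component declarations (real DCM has many more ports).
  component DCM  port (CLKIN : in std_logic; CLK0 : out std_logic); end component;
  component BUFG port (I : in std_logic; O : out std_logic); end component;
  signal clk0 : std_logic;

  attribute LOC : string;
  -- LOC the DCM first; the BUFG it feeds should sit on the same
  -- edge of the die so the dedicated DCM-to-BUFG routing is usable.
  attribute LOC of u_dcm  : label is "DCM_X0Y0";
  attribute LOC of u_bufg : label is "BUFGMUX0P";
begin
  u_dcm  : DCM  port map (CLKIN => clk_in, CLK0 => clk0);
  u_bufg : BUFG port map (I => clk0, O => clk_out);
end rtl;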
Austin,

ThankX for the detailed response. The one remaining question from my side is whether there is an operating frequency range difference between the various HSTL types?

John.

Austin Lesea <austin.lesea@xilinx.com> wrote in message news:<3E087E55.48CF1C6F@xilinx.com>...
> John,
>
> Starting on page 178
>
> http://www.support.xilinx.com/publications/products/v2pro/handbook/ug012_ug.pdf
>
> You will find the topologies of the four interfaces. The class I and class III are intended for unidirectional high speed interfaces, whereas classes II and IV are intended for bidirectional high speed interfaces (mirror symmetric, so they would support a tristate IO for both TX and RX).
>
> Classes I and II are weaker, and intended for shorter runs; classes III and IV are stronger, and are intended for longer (more heavily loaded) runs.
>
> Both 1.5V and 1.8V versions of all four classes of HSTL exist, and the 1.8V versions are just a bit faster than the 1.5V versions in most multi-purpose IOs (i.e. programmable IOs). 1.8 volts came about when many ASIC implementations just didn't work at the intended frequency, so the voltage was increased to make it work.
>
> All use a separate Vref supply of 1/2 Vcco at the receiver, which is a high speed comparator.
>
> All are parallel terminated standards that are suitable for multi-drop runs (in the unidirectional case), and have excellent signal integrity characteristics.
>
> The lower voltage swing leads to less cross talk and less EMI, but not less ground bounce, as the currents are about as large as with other strong IO standards.
>
> The disadvantage is that external resistors are required (unless you use an internal termination feature, such as the Virtex II and II Pro DCI), and that such parallel termination, internal or external, burns power.
> Many people use HSTL without the resistors on very short runs, but to do so violates the standard, and one must simulate and test to be sure that you will be safe, and the interface will work as intended.
>
> Austin
>
> John McMiller wrote:
>
> > Hi,
> > What are the main differences between the various HSTL I/O technologies:
> >
> > HSTL-I
> > HSTL-II
> > HSTL-III
> > HSTL-IV
> >
> > ?
> >
> > John

Article: 50996
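For anyone wiring this up on a Virtex-II part, the HSTL class is selected per pin via the IOSTANDARD constraint. A minimal sketch in VHDL (the entity, port names, and placeholder logic here are hypothetical; the attribute pass-through assumes Xilinx-flow synthesis, and the "_18" suffix names the 1.8V variants):

```vhdl
-- Hypothetical sketch: selecting HSTL classes per pin with
-- IOSTANDARD attributes (Virtex-II naming: HSTL_I..HSTL_IV for
-- 1.5V, HSTL_I_18 etc. for the 1.8V versions).
library ieee;
use ieee.std_logic_1164.all;

entity qdr_io is
  port (
    qdr_d : out std_logic;   -- unidirectional output: class I or III
    qdr_q : in  std_logic    -- unidirectional input
  );
end qdr_io;

architecture rtl of qdr_io is
  attribute IOSTANDARD : string;
  -- Class I: weaker driver, intended for short unidirectional runs.
  attribute IOSTANDARD of qdr_d : signal is "HSTL_I";
  -- The receiver side still needs VREF supplied on its bank.
  attribute IOSTANDARD of qdr_q : signal is "HSTL_I";
begin
  qdr_d <= qdr_q;  -- placeholder logic so both ports are used
end rtl;
```

The per-class driver strength and termination requirements are exactly the ones Austin describes above; the constraint only names the standard, it does not remove the need for the VTT resistors (or DCI) on a compliant run.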
Lyndon Amsdon wrote:
> Hi,
>
> I have just bought some EPM7064LC84-7 and made a JTAG lead also. Which software is better?
>
> The choice is Quartus II Web Edition or MAX+Plus Baseline, as they're both free and I'm a student (not at university). I am totally new to CPLDs but have worked on GALs in the past (and found the limitations).
>
> From what I understand, the MAX+Plus software supports only the MAX devices and Quartus covers every device, so I don't see a point for MAX+Plus. I must have that wrong.
>
> I downloaded Quartus a few weeks ago and played with the tutorial, and it looked good.
>
> However, on the website it said the 5 volt MAX7000S series will be supported in the second quarter of 2003. Do I have the S series, or is it just plain 7000?
>
> I want to be able to do VHDL mainly; schematic entry would be nice to have as an option. And the tools to program the actual chip too, along with simulation.
>
> Thanks for any help, Lyndon

Quartus is the successor of MaxPlus2. Future development is put into Quartus. It is also somewhat more intuitive in most aspects. So if possible use Quartus.

Rene
--
Ing.Buero R.Tschaggelar - http://www.ibrtses.com
& commercial newsgroups - http://www.talkto.net

Article: 50997
Hi,

I'm compiling some VHDL with Synplify Pro 7.2.1, and it comes up with the error message:

"While loop is not terminating? Try increasing the condition evaluation effort level with syn_evaleffort attribute"

I have a while loop in a function in a declarative region in the code, and this loop may have millions of iterations. The result of the function is used to initialise a constant. This attribute is not mentioned in the help, but it does appear in synattr.vhd, with the comment:

"syn_evaleffort is used on modules to define the effort to be used in evaluating conditions for control structures. ... The default value is 4. ... attribute syn_evaleffort : integer; -- an integer between 0 and 100 {noscope}"

Questions:
What do the values (0 to 100) mean?
What value should I use to guarantee that the synthesiser won't produce a spurious error message?
Do I apply the attribute to the function containing the loop, the constant getting initialised to the result of the function, or the architecture containing it all?

TIA,
Allan.

Article: 50998
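For what it's worth, a sketch of one plausible way to attach the attribute, going by the "used on modules" comment quoted from synattr.vhd: declare it and apply it at the architecture level. The entity, function, and effort value below are illustrative only; whether 100 actually silences the warning for a given loop is not something the quoted documentation confirms.

```vhdl
-- Hedged sketch: applying syn_evaleffort at the architecture level.
-- The attribute declaration matches the one quoted from synattr.vhd;
-- the rest of the design is hypothetical.
library ieee;
use ieee.std_logic_1164.all;

entity lut_rom is
  port (dummy : out std_logic);
end lut_rom;

architecture rtl of lut_rom is
  attribute syn_evaleffort : integer;                     -- as in synattr.vhd
  attribute syn_evaleffort of rtl : architecture is 100;  -- raised from the default of 4

  -- Compile-time function with a long-running while loop,
  -- used only to initialise a constant (as in the question above).
  function count_steps (limit : natural) return natural is
    variable n : natural := 0;
  begin
    while n < limit loop
      n := n + 1;
    end loop;
    return n;
  end function;

  constant STEPS : natural := count_steps(1000000);
begin
  dummy <= '1';
end rtl;
```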
> Accelerator fans with a love of C should definitely stay with a DSP. There really are excellent machines running at 400MHz and executing 6 instructions in parallel.
>
> Rene

For general non-DSP code, a crude comparison of a 3GHz P4 v. a 400MHz DSP, and I think I would just stay with the P4: it's already in the PC at no real design cost and FP is cheap. Duals are not expensive either.

Perhaps FPGA accelerators for FPGA PPR are questionable, since the internals are so proprietary. If the 6200 model had grown and stayed open, then it would be a different story, since every Tom, Dick & Harry wrote tools for that and you couldn't stop outsiders from doing the same in HW.

But the ASIC world has much bigger (more interesting?) problems than the FPGA world. I would hazard at least 10-100x bigger CAD problems, since they are generally dealing with much larger numbers of much simpler cells, gates, flops etc., many layers of interconnect, and 45deg routing too. All the ASIC vendors are dealing with the same basic open methodology, with mostly outside tools with well known algorithms. Hence, if Mentor, Cadence, or Avanti/Synopsys wanted to get a big competitive jump over the other guys, an FPGA-accelerated EDA suite should give a huge advantage.

Interesting, Steve, that you approached Mentor on this also. But the way Mentor & Cadence are duking it out over emulator patents doesn't bode well for putting FPGAs into the front line of other EDA tools. Then all the divisions' profits would just go to the lawyers. So, question: if I buy a monster FPGA board and implement some textbook EDA HW, who's going to sue me first?

Article: 50999
Hi Rene,

> Quartus is the successor of MaxPlus2. Future development is put into Quartus. It is also somewhat more intuitive in most aspects. So if possible use Quartus.

Thanks, I read somewhere else that this was true, which means I don't have to download again, as 100 megabytes on a 56k modem is a fair amount!