> > So what I need is a few dozen Pentium clock cycles to give the FPGA a task, > > which is examining a pattern on a small grid, then I need a boolean answer > > from the FPGA. > > I need tens of millions of those boolean answers per second, otherwise it > > is not a viable idea. > > > > What I need is a way to interface with the FPGA like I was accessing > > Pentium cache memory, > > or I need to have an FPGA + processor board, which I could feed a much more > > complex task. > > The AGPx4 slot is much faster than PCI of course, but I do not know a lot about latency and other problems. -- Best Regards Ulf at atmel dot com These comments are intended to be my own opinion and they may, or may not, be shared by my employer, Atmel Sweden.Article: 41901
"jeff" <jeff@none.com> schrieb im Newsbeitrag news:3CB3CDC1.5F92@none.com... [ trouble with Spartan-II board ] First of all, your design flow sounds correct. But have you also jumpered the DIN pin correctly? You can choose between DIN fed by the PROM or by the serial download cable. But remember: if you are generating a *.bit file which is later converted to *.mcs for programming into the FLASH, you need to select CCLK as the startup clock. IMPACT will complain because it finds your *.bit file with a CCLK startup clock, but just ignore this. Click Initialize Chain and program your *.mcs into the FLASH. JTAG as startup clock is only required (possible) if you are directly downloading to the FPGA via JTAG. -- MfG FalkArticle: 41902
"Frank de Groot" <franciscus_andreas@hotmail.com> schrieb im Newsbeitrag news:3cb4172e$1@news.wineasy.se... > So what I need is a few dozen Pentium clock cycles to give the FPGA a task, > which is examining a pattern on a small grid, then I need a boolean answer > from the FPGA. > I need tens of millions of those boolean answers per second, otherwise it is > not a viable idea. So in this case it is necessary (and possible) to move almost the whole algorithm into the FPGA. Just feed in the data; the FPGA does all processing/storing of results, and after it is done, it will transfer the results back to the CPU (the normal PC CPU). There are many FPGA boards available with SRAM/SDRAM. > What I need is a way to interface with the FPGA like I was accessing Pentium > cache memory, > or I need to have a FPGA + processor board, which I could feed a much more > complex task. That's the way to go. Don't treat the FPGA as a dumb coprocessor, but rather as a second, powerful, (almost) stand-alone processor. > Sorry it took me such a long time to realize that. I am starting to think > that current technology is not > ripe for a low-cost FPGA coprocessor in a PC that features I/O like it was > normal memory (preferably cache even). It depends on the application and, much more, on the experience and creativity of the programmer. > When will Intel put a FPGA in its processors? Wouldn't that be the absolute > killer app? Not today. And also not tomorrow. But isn't the instruction (microcode) set of the Pentium III / IV software configurable? I remember that they removed some bugs this way. A small step to the FPGA coprocessor. -- MfG FalkArticle: 41903
"Frank Zampa" <NOSPAMingzampa77@yahoo.com> schrieb im Newsbeitrag news:3cb447de.29279501@news.inet.it... > Hi, i'm starting with a new project. In this project i can use both an > XC9572-10 PC44 or XC9572-10 PC84. The resources used with the actual > program are: > Macrocells: 52 /72 ( 72%) > Product Terms: 313/360 ( 86%) > > To choose the package of the CPLD (44 or 84 pins) i must decide if i > can implement four octal three-state latch buffers ( 74HC573) in the > last free resources of the IC or not. You can't. Each bit of the buffer needs one macrocell, so 4x8 bits means 32 macrocells. You have only 20 macrocells left. And you may need even more macrocells if you want to control the tristate buffers from outside; if you have already generated the control signals with the existing logic, you don't need an additional macrocell. -- MfG FalkArticle: 41904
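The macrocell budget above can be sanity-checked with a few lines of arithmetic. This is only a sketch using the figures quoted in the thread (52 of 72 macrocells used, four 74HC573 octal latches, one macrocell per latched bit); the variable names are illustrative.

```python
# Macrocell budget for the XC9572 question above, using the thread's numbers.
total_macrocells = 72
used_macrocells = 52
free_macrocells = total_macrocells - used_macrocells  # 20 left

latch_bits = 4 * 8  # four 74HC573 octal latches, one macrocell per bit
fits = latch_bits <= free_macrocells

print(free_macrocells, latch_bits, fits)  # -> 20 32 False
```

Even before adding any macrocells for external tristate control, the latches alone exceed the remaining capacity.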
I am not sure if this suggestion is going to save your design, but to get close to 32-bit 33MHz PCI's maximum theoretical bandwidth (133MB/s), you have to do initiator (bus master) transfers. If the microprocessor (CPU) is sinking data into your card, you will only get a fraction (probably 1/4 or 1/5) of the 133MB/s number. So, that German company's (Wasn't it Cesys or something like that?) PCI card is not going to do the job. I have heard about the card in the past, but when I replied to your original posting, I forgot about it, so I didn't bring it up. I have seen an image processing PCI card with 8 TI DSPs on board advertised on EE Times, so I guess for that card, 32-bit 33MHz PCI is somehow adequate. Kevin Brace (In general, don't respond to me directly, and respond within the newsgroup.)Article: 41905
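The bandwidth fractions mentioned above work out roughly as follows. This is a sketch of the post's own back-of-envelope numbers (32-bit bus, 33 MHz clock, and the guessed 1/4 to 1/5 fraction for CPU-driven target transfers); the real fraction depends on the host chipset and burst behavior.

```python
# Peak 32-bit / 33 MHz PCI bandwidth and the target-only fraction
# guessed in the post above. Purely illustrative arithmetic.
bus_width_bytes = 32 // 8
clock_mhz = 33.333
peak_mb_s = bus_width_bytes * clock_mhz  # ~133 MB/s theoretical peak

# Non-bus-master (CPU-driven) transfers: the post guesses 1/4 to 1/5 of peak.
target_low = peak_mb_s / 5
target_high = peak_mb_s / 4
print(round(peak_mb_s), round(target_low), round(target_high))  # -> 133 27 33
```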
I wish to initialize the block RAM on the Virtex. I essentially wish to use them as a ROM. I'm using Foundation series... how do I program the block RAM output values?Article: 41906
My situation will be the latter one, although I don't get paid. I just don't want to waste time debugging a cable, so in order to save time and effort, $60 is worth the money. Kevin Brace (In general, don't respond to me directly, and respond within the newsgroup.) Falk Brunner wrote: > > > > Sure, but this depends on the situation. You said that this PCI thing is > just a hobby, not your daily paid work. So low cost is much more important > than time. On the other side it would be silly, wasting a engineer sitting > down for some hours for assembly/debugging a 60 $ piece of cable. Unless you > have a cheap wannabe engineer, aka intern . . .;-) > > -- > MfG > FalkArticle: 41907
Bert, I ran those test cases against Synplify 3.x a number of years ago and did not get 3 LUTs. Since I long ago built my encode and decode libraries, they are what I use for these common needs. On the subject of clarity, that whole case structure reduces to: o = fnEncBin(a) ; I am sorry, but I see the case statement as something that has to be reverse-engineered to see the intent, and it includes the dreaded don't cares to achieve efficiency at the price of ambiguity and a potential loss of portability. Please don't tell me that Doulos training recommends don't cares when that leads to pre/post synthesis mismatches. On the other hand, the function name says all, generates the most compact results possible independent of the tool, scales automatically, and always matches simulation before and after synthesis. Regards, "Bert Cuzeau" <bertrand.cuzeau@worldonline.fr> wrote in message news:b3e1d50a6000d751075e033cc34ea955.61171@mygate.mailgate.org... > Sweir, > > Thanks a lot for your input, but it worries me. > Which tool(s) did you find that had a problem with this ? > > I sure know of other ways of coding this, but this description > (hotdecod) is the clearest by far, + easy to understand and adapt : > * what if you have only 6 bits instead of 8 ? > * what if I need to customize the decoding completely ? > > What do you think is wrong with this description ? > It sure ends up as a very easy to reduce Karnaugh map... > It is not some weird corner case, it's the kind of code we > use all the time. > > When I teach HDLs (i.e. when I don't design :-), I keep > saying that RTL Synthesis tools main job is reducing the > combinational logic, and that they are very good at that. > The old logic reduction algorithms I knew of did converge > in this case (Espresso, Quine McCluskey...). > If some tools are having trouble here, the fix seems easy. > > If this trivial table is not reduced correctly, it scares me > for when I write large combinational functions... 
> > Another concern is that it could mean that some tool(s) would > not use the don't cares ('-') to simplify the logic ??? > That's definitely scary too... > > Bert. > > > "sweir" <weirsp@yahoo.com> wrote in message > news:7fVs8.203628$Yv2.67472@rwcrnsc54... > > > Bert, I've seen that test case throw off tools in the past. > > If you are really concerned about consistent QOR, I do not recommend > > coding this way. For discrete to encoded operations, index based functions > > provide consistent results across every tool I have seen. > > An example would be: > > > -------------------------------------------------------------------------- -- > > -- Encode a vector of discrete bits into a binary vector representation, > > -- WITHOUT guarding against multiple bits on. > > -- > > -- For a single bit on, the encoded value is the offset from the > > -- low index of src of the asserting bit, regardless of src's > > -- endianness. > -------------------------------------------------------------------------- -- > > function fnEncBin > > ( > > src : std_logic_vector > > ) return std_logic_vector is > > variable rslt : std_logic_vector( fnLog2M( src'length ) - 1 downto 0 ) ; > > begin > > rslt := ( rslt'range => '0' ) ; > > for rslt_idx in rslt'range loop > > for src_idx in src'length - 1 downto 0 loop > > if( ( ( src_idx / ( 2**rslt_idx ) ) MOD 2 ) = 1 ) then > > rslt( rslt_idx ) := rslt( rslt_idx ) OR src( src_idx + src'low ) ; > > end if ; > > end loop ; > > end loop ; > > return( rslt ) ; > > end ; > > > > Put in an eight bit vector, get 3) 4 input LUT's independent of the tool. > > > -- > Posted via Mailgate.ORG Server - http://www.Mailgate.ORGArticle: 41908
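The behavior of the quoted fnEncBin function can be modeled in a few lines: output bit r is the OR of all input bits whose index has bit r set, so a one-hot input encodes to the index of the asserted bit. This is a sketch of the same algorithm in Python (the name enc_bin is illustrative, not from the post), assuming a one-hot input as the function's own comment states.

```python
def enc_bin(src):
    """Model of the index-based one-hot encoder quoted above: result bit r
    is the OR of all input bits whose index has bit r set."""
    width = max(1, (len(src) - 1).bit_length())  # ceil(log2(len(src)))
    rslt = [0] * width
    for r in range(width):
        for s in range(len(src)):
            if (s >> r) & 1:
                rslt[r] |= src[s]
    # Interpret the result bits as an integer, bit 0 = LSB.
    return sum(bit << i for i, bit in enumerate(rslt))

# With exactly one bit set, the result is that bit's index:
print(enc_bin([0, 0, 0, 0, 0, 1, 0, 0]))  # -> 5
```

As the thread notes, nothing guards against multiple bits being on; in that case the results simply OR together, which is exactly what the synthesized logic does.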
"sweir" <weirsp@yahoo.com> wrote: > > o = fnEncBin(a) ; > > the function name says all > fnEncBin says all? How about function declarations like: function to_unsigned(arg : in OneHot) return unsigned; function to_OneHot(arg : in unsigned) return OneHot; Where the OneHot type is an array of std_logic used to represent integers in one way and "unsigned" is an array of std_logic used to represent integers in a different way. Paul.Butler@ni.comArticle: 41909
I couldn't find any vendor for it, no. Frank "Steven Derrien" <sderrien@irisa.fr> wrote in message > Do you know if this board is commercially available ? It sounds to me that it is still a research prototype.Article: 41910
Due to the nature of my application, I am afraid I can't let the FPGA handle anything else but inner loop stuff. The rest is horribly complex and typically suited for a regular processor. But I'll look into it. Thanks for the advice everyone. Frank "Kevin Brace" <ihatespam99kevinbraceusenet@ihatespam99hotmail.com> wrote in message news:a921pk$4ot$2@newsreader.mailgate.org... > I am not sure if this suggestion is going to save your design, but to > get close to 32-bit 33MHz PCI's maximum theoretical bandwidth (133MB/s), > you have to do initiator (bus master) transfers.Article: 41911
Eric Crabill wrote: > > Hi Kevin, > > > I will be very interested if such a circuit (After all, PCILOGIC > > is just a tiny circuit with a few NAND gates.) can be patented. > > Or are you saying that the concept of CE (Clock Enable) was first > > patented by Xilinx? > > I am not saying either; I am talking about US6292020, which covers > "low skew programmable control routing for a programmable logic device" > which is certainly a rather obscure topic, but probably relevant to > this discussion. > Eric, I have done a patent search on patents assigned to Xilinx in the past, but I totally missed yours . . . The essence of your patent seems to be that a small logic block on each side of the chip can control all the IOBs of that side. I only saw part of the patent because Delphion no longer lets me see the actual image of the patent without paying them. I also tried the USPTO website, but I didn't have the correct plug-in installed in my computer, so I couldn't see all the images. I guess I will try a computer at a library later. However, I find it interesting that PCILOGIC existed from the first Virtex which was released in 1998, but the PCILOGIC patent (Or the general concept of the patent.) wasn't filed until August 2000. Why the delay, although the patent didn't seem to get stuck at USPTO for years? Anyhow, I will call US Patent 6,292,020 "the PCILOGIC patent", and it will be mentioned in an FPGA FAQ about PCILOGIC I am writing right now. > > However, the delay of unregistered paths going through it seems large. > > (Tpcilog ~= 1.6ns for IRDY and TRDY in XC2S150-5). > > I think you are not considering that this number includes both the logic > delay (equivalent to a LUT + MUXF5) and the input routing delay on the > IRDY# and TRDY# signals. It is respectable... The real advantage is > the > dedicated routing of the output net. That is what saves time. 
> I didn't realize until yesterday, but when IRDY# or TRDY# go through PCILOGIC, their input delay name won't be called Tiopi, but Tiopci instead. In a XC2S150-6CPQ208, Tiopi is 0.664ns, but Tiopci is 0.538ns, so that helps the PCI_CE (Clock Enable) line of the chip. You are right that the Tpcilog I am talking about includes the routing delay from IRDY or TRDY to PCILOGIC and the gate delay through it, because the gate delay between the pin and PCILOGIC is always 0ns. You are also right that Tpcilog of 1.352 ns in a XC2S150-6CPQ208 is probably going to be better than the routing delay to a 5-input LUT (XST doesn't seem to infer a 5-input LUT for my PCILOGIC emulation logic, so it gets broken into several 4-input LUTs, making the matter worse . . . ) placed right next to the pin. Once I compared the routing delay from PCILOGIC and from the emulated one to AD's CE input, and PCILOGIC's routing delay was far less than the emulated one's. However, the thing that disappointed me was: why is the gate delay through one NOT gate (can be a NAND gate acting as an inverter), one 2-input NAND gate, and one 3-input NAND gate, plus the routing delay from the pin to PCILOGIC, 1.352 ns? (NAND_IRDY = !(!IRDY * !I1), NAND_TRDY = !(!TRDY * !I3), PCI_CE = !(!I2 * NAND_IRDY * NAND_TRDY)) Why is the delay through those NAND gates that large, and is that normal for a chip fabricated in a 0.18/0.22u process? > Regarding the gclkdel option: > > > When you say, "you can experimentally determine what it does.," do you > > mean like I have to put some kind value, and see if the PCI card will > > crash to determine the approximate delay it inserts? > > No, I mean you can make a simple test design to measure the clock to out > of a flip flop, place and route it, then generate 31 bitstreams (each of > the valid gclkdel options). Take all 31 into the lab, and measure the > clock to out. The differences you observe will likely be related to the > use of that option. 
At room temperature, at nominal VCC, on that one > device you happen to be measuring. > Okay, you are right, there are always ways to figure something out in a fairly simple way. However, the lack of an oscilloscope will prevent me from experimenting with it. Won't I need a few-GHz oscilloscope to accurately observe the delay? > > Also, the delay added by using /Gclkdel doesn't get reflected during > static timing analysis. Why isn't the delay added before static timing > analysis? > > Probably because this "feature" is not intended for general use (you may > re-read my boilerplate about "unsupported, undocumented" but is intended > for use with the Xilinx PCI core for 66 MHz designs, and in that context > only. > I guess I was right (obvious) that /Gclkdel's delay information won't be reflected during static timing analysis. I will guess again that if /Gclkdel information was used to calculate setup/hold/output time during a static timing analysis, it wouldn't be a secret at all, just like the way some people figured out how to obtain a simulation model of PCILOGIC. I am aware of the "unsupported and undocumented" nature of these features though. Despite being unsupported, yesterday I fired up my PCI IP core with PCILOGIC, and it worked perfectly fine in the two computers I tested it in. (Intel chipset and SiS chipset.) Although all the testing I did was with single cycle I/O and Configuration cycles, and not burst memory cycles where the PCILOGIC shines. So, I guess PCILOGIC works just like the simulation model. > > As an alternative to /Gclkdel, I have come up with an idea of > > tying two adjacent GCLKBUF to create some extra global clock buffer > > delay. How does this approach compared to /Gclkdel option, and is > > it more desirable than /Gclkdel option? Tying up two GCLKBUF creates > > about 1.0ns of extra delay. 
> > Your approach is a valid way to add delay, but have you considered: > I wasn't sure if my approach was a correct one, but if you are going to say it is a valid way to add delay, I feel a little better. > 1. What it does to the clock to out performance? When only one GCLKBUF is used, the global clock buffer delay of a XC2S150-6CPQ208 is about 1.5ns. Although the 1.5ns number can change depending on how many FFs are actually being used, and when a different chip is used (Will get larger if a bigger chip is used.). When I tie two adjacent GCLKBUFs together (In my case, let the clock signal enter GCLKPAD3, go through GCLKBUF3, and then to GCLKBUF2) that creates about 1.0ns of extra global clock buffer delay. Fortunately, Spartan-II-6's PCI66_3 output buffer is very fast, so when only one GCLKBUF is used, the worst Clock-to-Output delay (Tco or Tval) is about 4.6ns. When two GCLKBUFs are tied together to get the extra 1.0ns of delay, Tval is still about 5.6ns. Since 66MHz PCI's Tval is < 6ns, I can still give up another 0.4ns in theory to help the setup time, but the clock delay is not really adjustable, so I will be happy with whatever I get. (I got 1.0ns of extra time by tying two GCLKBUFs, but I prefer getting a little more. An additional 0.3ns will help a lot, but it won't happen unless I use a bigger chip.) When going from XC2S150-6CPQ208 to XC2S200-6CPQ208, I saw about 0.5ns of increase in global clock delay, because the chip size got larger. (More loading on the global clock line.) If the current trend continues, at XCV1000 (The largest FPGA that can handle 5V PCI.), Tval can be about 5.8ns or 5.9ns, barely meeting the Tval < 6ns requirement, but that is only my guess because I don't have ISE Foundation. So, my own theory is that a bigger chip probably has an easier time meeting 66MHz PCI timings than a smaller one. (In a typical design, a smaller chip runs faster than a bigger chip, but 66MHz PCI seems like an exception.) 
I can only imagine what goes on at Xilinx, but whoever worked on the IOB output part must have worked really hard to keep Clock-to-Output really low, so that the Tco margin (6.0ns - 4.6ns = 1.4ns) can be given up to help meet 66MHz PCI's stringent Tsu < 3ns requirement. > 2. What it does to the input hold (0 ns) requirements? > > It may fix your input setup problems, but break something else... > > Eric I believe I already got the hold time issue under control. IOB input FFs' programmable delay seems to be more than two GCLKBUFs + clock distribution delay. Therefore hold time won't be an issue there. However, I cannot always rely on those input FFs for some signal paths, and in that case, I have to place some FFs far away from the pin to create routing delay. So, my "tie two GCLKBUFs together" scheme doesn't cause problems regarding Tco and Th (Hold Time). fmax is also not a big issue either. Therefore, Tsu < 3ns is pretty much the only problem here. If I calculate the Tsu I got, I have 3.0ns (Tsu of 66MHz PCI.) + 1.5ns (The normal clock distribution delay.) + 1.0ns of extra global clock buffer delay. However, even with 5.5ns to 5.6ns of total Tsu, meeting that number is very hard in a XC2S150-6CPQ208. The key to meeting the total Tsu of 5.6ns seems to be keeping the levels of 4-input LUTs below a certain number and using the floorplanner to group relevant LUTs together within a CLB. Yes, I did that, but . . . I still got 19 paths not meeting the requirement. The best timing score I got so far was 4,909, so I am close, but the design cannot seem to make it . . . Eric, you probably picked the pin out for XC2S150-6CPQ208 so you probably know what I am talking about, but how come REQ# and GNT# pins are not placed near the rest of the control signals? (i.e., FRAME#, IRDY#, DEVSEL#, TRDY#, STOP#, and PAR. Around P23 through P34.) That choice seems okay for 33MHz PCI, but not for 66MHz PCI. (Okay, I guess you can say no one intended to do 66MHz PCI in a PQ208 package.) 
REQ#'s outcome depends on FRAME# and IRDY#, and those two signals have to travel very long distances; therefore, there is virtually no chance of meeting the timing requirements of 66MHz PCI. I guess I will have to use an FG456 version of Spartan-II to have any chance (even a very slim chance) to meet 66MHz PCI's Tsu because that one will likely have a better chance of REQ# and GNT# pins being close to the rest of the control pins. But I haven't tried that out yet because I cannot seem to get the Xilinx LogiCORE PCI pinout for the Spartan-II FG456 package without paying something. Kevin Brace (In general, don't respond to me directly, and respond within the newsgroup.)Article: 41912
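The timing budget walked through in the exchange above can be summarized numerically. This sketch only restates the figures quoted in the thread for a XC2S150-6 in a PQ208 package (it is not a new measurement, and the variable names are illustrative): a 6 ns Tval and 3 ns Tsu from the 66 MHz PCI spec, about 1.5 ns of clock distribution through one GCLKBUF, 1.0 ns added by tying two GCLKBUFs in series, and a 4.6 ns clock-to-out with a single buffer.

```python
# 66 MHz PCI timing budget from the discussion above, in ns.
tval_spec = 6.0    # spec: clock-to-valid (Tval) must be < 6 ns
tsu_spec = 3.0     # spec: input setup (Tsu) must be < 3 ns

clk_dist = 1.5     # one GCLKBUF's clock distribution delay (quoted figure)
extra_del = 1.0    # delay added by tying two GCLKBUFs in series
tco_base = 4.6     # clock-to-out with a single GCLKBUF (quoted figure)

tval = tco_base + extra_del  # 5.6 ns: still inside the 6 ns spec
tsu_internal = tsu_spec + clk_dist + extra_del  # ~5.5 ns internal setup budget

print(tval, tval < tval_spec, tsu_internal)  # -> 5.6 True 5.5
```

The intentionally delayed clock trades clock-to-out margin for internal setup time, which is why the post cares so much about the 1.4 ns of Tco slack.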
I am using 4.2WP0.0, too, but I haven't had the problem you had. How about reinstalling the software? Don't forget to get some junk out of the registry when you are doing so. Kevin Brace (In general, don't respond to me directly, and respond within the newsgroup.)Article: 41913
Hi, I am trying to seek clarification on which cable (MultiLINX or Parallel Cable III) is required in order to use ChipScope ILA. I looked through the ChipScope manual (UG005 / PN 0401884 (v4.2) March 22, 2002) and found the following excerpt. "The ChipScope Analyzer tool can use either the MultiLINX, Parallel Cable III, or Parallel Cable IV cables to communicate with the target devices in the Boundary Scan chain of the board-under-test." So that sounds like I can use either. But then later on it says the following, "The ChipScope Analyzer supports the Xilinx MultiLINX™, Parallel Cable III and Parallel Cable IV download cables for communication between the PC and FPGA(s). The MultiLINX cable supports both USB (Windows 98 and Windows 2000) and RS-232 serial communication from the PC (see Figure 1-1, page 2). The Parallel Cable III and Parallel Cable IV cables support only parallel port communication from the PC to the Boundary Scan chain." Which sounds like the parallel cable only supports communication in the PC -> FPGA direction, so how does the captured data get sent back to the PC? Are there any limitations (apart from speed) in using the parallel cable rather than the multilinx cable? Regards AndrewArticle: 41914
Paul, why are you comparing a declaration with instantiations? Personally, I prefer simple names and appropriate comments in the source code. We appear to agree on the importance of clear declarations. Sample declarations of encode and decode functions: ---------------------------------------------------------------------------- -- Expand a binary coded vector into 2^length individual bits. -- -- The return vector is big-endian. ---------------------------------------------------------------------------- function fnDecBin ( src : std_logic_vector ) return std_logic_vector ; ---------------------------------------------------------------------------- -- Encode a vector of discrete bits into a binary vector representation, -- WITHOUT guarding against multiple bits on. -- -- For a single bit on, the encoded value is the offset from the -- low index of src of the asserting bit, regardless of src's -- endianness. ---------------------------------------------------------------------------- function fnEncBin ( src : std_logic_vector ) return std_logic_vector ; ---------------------------------------------------------------------------- -- Encode a vector of discrete bits into a binary vector representation -- where the highest asserting index of src is the ONLY bit which -- contributes to the output result. -- -- The encoded value is the offset from the low index of src of the -- most significant asserting bit in src. ---------------------------------------------------------------------------- function fnEncBinPri ( src : std_logic_vector ) return std_logic_vector ; I find VHDL overloading very useful and so try to avoid building type names into functions other than conversions. 
Conversion functions that I have do use type names: --------------------------------------------------------------------------- - -- fnToBit -- Convert an integer to a bit ------------------------------------------------------------------------ ---- function fnToBit ( src : integer ) return bit ; --------------------------------------------------------------------------- - -- fnToBit -- Convert an std_logic to a bit ------------------------------------------------------------------------ ---- function fnToBit ( src : std_logic ) return bit ; --------------------------------------------------------------------------- - -- fnToBitV -- Convert an integer to bit_vector --------------------------------------------------------------------------- - function fnToBitV ( src : integer ; w : integer ) return bit_vector ; --------------------------------------------------------------------------- - -- fnToBitV -- Convert a std_logic_vector to a bit vector --------------------------------------------------------------------------- - function fnToBitV ( src : std_logic_vector ) return bit_vector ; --------------------------------------------------------------------------- - -- fnToBitVector -- Convert a std_logic_vector to a bit vector --------------------------------------------------------------------------- - function fnToBitVector ( src : std_logic_vector ) return bit_vector ; --------------------------------------------------------------------------- - -- fnToSigned -- Convert a std_logic_vector to a signed vector. 
--------------------------------------------------------------------------- - function fnToSigned ( src : std_logic_vector ) return signed ; --------------------------------------------------------------------------- - -- fnToSINT -- Convert a std_logic_vector to a signed integer --------------------------------------------------------------------------- - function fnToSINT ( src : std_logic_vector ) return integer ; --------------------------------------------------------------------------- - -- fnToSl -- Convert an integer to a std_logic ------------------------------------------------------------------------ ---- function fnToSl ( src : integer ) return std_logic ; --------------------------------------------------------------------------- - -- fnToSl -- Convert a bit to a std_logic ------------------------------------------------------------------------ ---- function fnToSl ( src : bit ) return std_logic ; --------------------------------------------------------------------------- - -- fnToSLV -- Convert a bit_vector to std_logic_vector --------------------------------------------------------------------------- - function fnToSLV ( src : bit_vector ) return std_logic_vector ; --------------------------------------------------------------------------- - -- fnToSLV -- Convert an integer to std_logic_vector --------------------------------------------------------------------------- - function fnToSLV ( src : integer ; w : integer ) return std_logic_vector ; --------------------------------------------------------------------------- - -- fnToSlv -- Convert a signed to std_logic_vector --------------------------------------------------------------------------- - function fnToSlv ( src : signed ) return std_logic_vector ; --------------------------------------------------------------------------- - -- fnToSLV -- Convert a std_logic to single bit std_logic_vector --------------------------------------------------------------------------- - function fnToSLV ( src : 
std_logic ; w : integer ) return std_logic_vector ; --------------------------------------------------------------------------- - -- fnToSLV -- Convert an unsigned to std_logic_vector --------------------------------------------------------------------------- - function fnToSLV ( src : unsigned ) return std_logic_vector ; --------------------------------------------------------------------------- - -- TO_STDLOGICVECTOR -- Convert a signed to std_logic_vector --------------------------------------------------------------------------- - function TO_STDLOGICVECTOR ( src : signed ) return std_logic_vector ; --------------------------------------------------------------------------- - -- TO_STDLOGICVECTOR -- Convert an unsigned to std_logic_vector --------------------------------------------------------------------------- - function TO_STDLOGICVECTOR ( src : unsigned ) return std_logic_vector ; --------------------------------------------------------------------------- - -- fnToUint -- Convert a bit_vector to an unsigned integer --------------------------------------------------------------------------- - function fnToUint ( src : bit_vector ) return integer ; --------------------------------------------------------------------------- - -- fnToUint -- Convert a std_logic_vector to an unsigned integer --------------------------------------------------------------------------- - function fnToUint ( src : std_logic_vector ) return integer ; --------------------------------------------------------------------------- - -- TO_UNSIGNED -- Convert a std_logic_vector to unsigned --------------------------------------------------------------------------- - function TO_UNSIGNED ( src : std_logic_vector ) return unsigned ; Regards, "Paul Butler" <Paul.Butler@ni.com> wrote in message news:ub9db5fqv26a5f@corp.supernews.com... > > "sweir" <weirsp@yahoo.com> wrote: > > > > o = fnEncBin(a) ; > > > > the function name says all > > > > fnEncBin says all? 
How about function declarations like: > > function to_unsigned(arg : in OneHot) return unsigned; > function to_OneHot(arg : in unsigned) return OneHot; > > Where the OneHot type is an array of std_logic used to represent integers in > one way and "unsigned" is an array of std_logic used to represent integers > in a different way. > > Paul.Butler@ni.com > > > >Article: 41915
Hi, > The essence of your patent seems to be that a small logic block > on each side of the chip can control all the IOBs of that side. I think it is more about low skew distribution of control signals along the sides of the programmable array. As you have no doubt figured out, in bus interfaces like PCI, the I/O timing is the tough part and what makes it even more difficult is that there are both minimum (hold) and maximum (setup) delays. The timing needs to fall in a certain "window" for it to be correct. > Why the delay, although the patent didn't seemed to get stuck > at USPTO for years? There is some "time limit" between public disclosure and when you can no longer file. I'm not a lawyer. However, I did check this with our legal department before bothering to file. > I didn't realize until yesterday, but when IRDY# or TRDY# go > through PCILOGIC, their input delay name won't be called Tiopi, > but Tiopci instead. I think you mentioned that you don't have FPGA Editor, but if you were to look at these special IOBs, you would notice that they are different from most other IOBs. They are called PCIIOBs, and are used for the special pins on the left and right sides. They have a "direct" output that goes to the PCILOGIC without going through any switch boxes. > Why is the delay through those NAND gates that large, and is > that normal for a chip fabricated in a 0.18/0.22u process? You need to be aware of the difference between the actual silicon and a model of the silicon. All of the stuff you see in the tools, and the speedfiles, is a software model of the silicon. From a modeling point of view, does it matter if: Tpcilogic = 2 ns pci_ce route = 2 ns -or- Tpcilogic = 1 ns pci_ce route = 3 ns The answer is no, it does not, if you cannot use these separately. The sum of the path is what is important. The individual timing parameters don't make a difference in this case. 
Another thing to consider is people building the models may elect to simplify the model if it makes sense. For example, those I1, I2, I3 inputs. I doubt they all have the same propagation delay to the output of the PCILOGIC block. But they are probably modeled that way, using one "worst case" delay. > Won't I need a few GHz oscilloscope to accurately observe the > delay? My example was only that. You can craft all sorts of test patterns, using multiple global buffers, which could exhibit four times the delay (might be easier to measure). > So, my own theory is that a bigger chip is probably easier to meet > 66MHz PCI timings than a smaller one. (In a typical design, a > smaller chip runs faster than a bigger chip, but 66MHz PCI seems > like an exception.) As the array size gets bigger: * setup gets "easier" to meet * hold gets "harder" to meet * clock to out gets "harder" to meet > I believe I already got hold time issue under control. IOB > input FFs' programmable delay seem to be more than two GCLKBUFs > + clock distribution delay. Therefore hold time won't be an > issue there. The Virtex/Spartan-II datasheet guarantees zero hold time for input flip flops in IOBs only under the condition that the input delay buffer is enabled, and that you are using one global buffer. If you do anything else, that guarantee goes out the window... The only other thing I'm aware of which quotes hold times is the "pin to pin" datasheet style report that comes out of the timing analyzer. You may want to check that. > However, I cannot always rely on those input FFs for some > signal paths, and in that case, I have to place some FFs > far away from the pin to create routing delay. Yes, this is a general problem and you need to account for it when you design a PCI interface. > but how come REQ# and GNT# pins not placed near the rest > of the control signals? (i.e., FRAME#, IRDY#, DEVSEL#, > TRDY#, STOP#, and PAR. Around P23 through P34.) 
> That choice seems okay for 33MHz PCI, but not for 66MHz > PCI. (Okay, I guess you can say no one intended to do > 66MHz PCI in a PQ208 package.) There is a two way tradeoff (at least) when you are picking a pinout: 1. Pin ordering to match external edge connector 2. Pin ordering to maximize internal performance It's my guess that whoever picked the pinout elected to optimize this pinout for #1, probably thinking that it was not going to be used for a 66 MHz design. Hope that helps, EricArticle: 41916
I don't know if they support it or not, but I've been able to do readback using the Parallel Cable III.

Steve

> Which sounds like the parallel cable only supports communication in
> the PC -> FPGA direction, so how does the captured data get sent back
> to the PC?

Article: 41917
There is a one-year time limit between the time 1) it is disclosed publicly or 2) offered for sale. I'm pretty sure you have to sign some papers to that effect. You should really watch out about this: I just read where the inventor of the blue LED said in court that "he told some lies in his patent application," and now he might be put on trial for that (either Nature or Science this week).

Steve

-- from 35 USC - Patent Laws (USPTO) --
CHAPTER 10 - PATENTABILITY OF INVENTIONS
Sec. 102(b): the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of the application for patent in the United States, or

> There is some "time limit" between public disclosure and when you
> can no longer file. I'm not a lawyer. However, I did check this
> with our legal department before bothering to file.

Article: 41918
On 10 Apr 2002 12:37:51 -0700, 13378988@qub.ac.uk (almost_a_gnome) wrote:

> I wish to initialize the block RAM on the Virtex.
>
> I essentially wish to use them as a ROM.
>
> I'm using Foundation Series.
>
> How do I program the block RAM output values?

I think you will find the guidance you need here:

http://www.fpga-faq.com/FAQ_Pages/0031_How_to_initialize_Block_RAM.htm

Philip Freidin
Fliptronics

Article: 41919
"Steve Casselman" <sc.nospam@vcc.com> wrote in message news:<_AEs8.1784$2_6.924707050@newssvr14.news.prodigy.com>...

> I hate to keep saying this, but it seems people with patents get no respect.
> If you read my patent http://www.delphion.com/details?pn=US06178494__ you
> will see that the Pilchard board falls under that patent. I explicitly cite
> RAMs of all kinds. Remember, the patent issued in 2000; I put it in 11/96
> and have all of it in my notebook from 12/94. The people from Hong Kong are
> doing great work, but .....
>
> Steve Casselman, CEO
> Virtual Computer Corporation
>
> > invented in Hong Kong, was available for Windows. The Pilchard is a FPGA
> > board that plugs into a DIMM slot.

So you've patented the idea of hanging a chip off of a memory bus? There's plenty of prior art for that, and as far as I can tell that's all the Pilchard board does. Putting any chip on a memory bus is obvious to electrical engineers, so what makes your idea non-obvious?

What about Nuron? Intel bought them, so they have deep pockets. Did they license your patent? If not, then are you suing Intel?

Article: 41920
Could someone please give a hint how to instantiate the built-in multipliers that Xilinx claims are available in the Virtex 2000E?

I need to exploit them in my VHDL design. Should I use something like primitives or components from the Xilinx library? But those seem to be lower-level functions like multiplexers, registers, comparators, ...

Thanks,
Max Edmand

Article: 41921
Hi Andrew,

You can use the Parallel Cable for ChipScope. I use it to demonstrate the functionality of the tool, and nearly all of my customers use it instead of MultiLINX. It supports the transfer of data in both directions, PC -> FPGA and FPGA -> PC. As you said, the only difference is speed.

Thanks,
Jason

andrew.bridger@paragon.co.nz (Andrew Bridger) wrote in message news:<7939158e.0204101434.3c7bb138@posting.google.com>...

> Hi,
> I am trying to seek clarification on which cable (MultiLINX or
> Parallel Cable III) is required in order to use ChipScope ILA. I
> looked through the ChipScope manual (UG005 / PN 0401884 (v4.2) March
> 22, 2002) and found the following excerpt.
>
> "The ChipScope Analyzer tool can use either the MultiLINX, Parallel
> Cable III, or Parallel Cable IV cables to communicate with the target
> devices in the Boundary Scan chain of the board-under-test."
>
> So that sounds like I can use either. But then later on it says the
> following,
>
> "The ChipScope Analyzer supports the Xilinx MultiLINX™, Parallel Cable
> III and Parallel Cable IV download cables for communication between
> the PC and FPGA(s). The MultiLINX cable supports both USB (Windows 98
> and Windows 2000) and RS-232 serial communication from the PC (see
> Figure 1-1, page 2). The Parallel Cable III and Parallel Cable IV
> cables support only parallel port communication from the PC to the
> Boundary Scan chain."
>
> Which sounds like the parallel cable only supports communication in
> the PC -> FPGA direction, so how does the captured data get sent back
> to the PC?
>
> Are there any limitations (apart from speed) in using the parallel
> cable rather than the MultiLINX cable?
>
> Regards,
> Andrew

Article: 41922
The approach to synchronizing things really depends on the nature of the signal.

If the signal changes slowly relative to the destination clock (less than half the frequency), you can generally run it through two flops clocked by the destination clock:

  always @ (posedge dest_clk)  // level synchronizer
  begin
    meta <= sig_slow;
    sync <= meta;
  end

If the signal is a pulse, only one source clock wide, but with pulses far apart, you can put in a little pulse-stretcher and then a level synchronizer in the destination clock domain.

If the input data is a bus, or changes too fast for either of the previous methods, then a more complicated solution is needed, usually a FIFO. The input side of the FIFO is clocked by the source clock and the output side of the FIFO is clocked by the destination clock. You can build small FIFOs out of flops, but for bigger ones you'll need a two-ported RAM.

CP

In article <325691ba.0204030158.700d46cf@posting.google.com>, Amit Deshpande <amitvlsi@hotmail.com> wrote:

> Actually I wanted to synchronize two inputs which are continuously
> changing on different clocks,
> e.g. say inputA changing on clkA and inputB changing on clkB, and it
> is not fixed whether clkA or clkB is faster. It can be either way.
> So can you please suggest whether there is any method by which I can do it?
>
> Thanks and regards,
> Amit

Article: 41923
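For the single-cycle-pulse case above, a common alternative to the pulse-stretcher is a toggle synchronizer: convert each pulse to a level transition in the source domain, run the level through a two-flop synchronizer, and detect edges in the destination domain. A minimal VHDL sketch (the entity and signal names are hypothetical, not from the post):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity pulse_sync is
  port (
    src_clk   : in  std_logic;
    src_pulse : in  std_logic;  -- one src_clk period wide
    dst_clk   : in  std_logic;
    dst_pulse : out std_logic   -- one dst_clk period wide
  );
end entity;

architecture rtl of pulse_sync is
  signal toggle             : std_logic := '0';
  signal meta, sync, sync_d : std_logic := '0';
begin
  -- Source domain: turn each pulse into a level transition.
  process (src_clk) begin
    if rising_edge(src_clk) then
      if src_pulse = '1' then
        toggle <= not toggle;
      end if;
    end if;
  end process;

  -- Destination domain: two-flop synchronizer plus edge detector.
  process (dst_clk) begin
    if rising_edge(dst_clk) then
      meta   <= toggle;
      sync   <= meta;
      sync_d <= sync;
    end if;
  end process;

  dst_pulse <= sync xor sync_d;  -- one dst_clk pulse per source pulse
end architecture;
```

As with the pulse-stretcher, this only works if source pulses are spaced several destination clocks apart; otherwise transitions are lost and a FIFO is the right tool.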
Instantiate MULT18X18S. Some synthesizers might be able to just infer them (with '*'), but I've been instantiating them. Make sure you have the most current data sheet for multiplier timing. (They've been getting progressively slower.)

"Max Edmand" <maxedman3503@yahoo.com> wrote in message news:3a30996f.0204101958.2df0945c@posting.google.com...

> Could someone please give a hint how to instantiate built in
> multipliers that Xilinx claims are available in Virtex 2000E?
>
> I need to exploit them in my VHDL design. should I use something
> like primitives or components from Xilinx Library? but they seem
> to be lower level functions like multiplexers, registers,
> comparators,...
>
> Thanks,
> Max Edmand

Article: 41924
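Since the original question asked for VHDL, a direct instantiation might look like the sketch below. This assumes the UNISIM component library and uses made-up entity/signal names; MULT18X18S is the registered (synchronous) variant, clocked by C with clock enable CE and synchronous reset R.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

library unisim;
use unisim.vcomponents.all;

entity mult_example is
  port (
    clk  : in  std_logic;
    a, b : in  std_logic_vector(17 downto 0);  -- 18-bit signed operands
    p    : out std_logic_vector(35 downto 0)   -- 36-bit signed product
  );
end entity;

architecture rtl of mult_example is
begin
  -- Registered 18x18 signed block multiplier
  u_mult : MULT18X18S
    port map (
      A  => a,
      B  => b,
      C  => clk,
      CE => '1',
      R  => '0',
      P  => p
    );
end architecture;
```

For the asynchronous version, instantiate MULT18X18 instead, which drops the C, CE, and R ports.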
Hi -

We're currently using ChipScope in the lab to debug some Virtex-II parts. The functionality is impressive, but the speed is somewhat lacking. When we try to use ChipScope logic in parts running at 125 MHz, we can't meet timing. Simplifying the trigger conditions helps somewhat, but not enough.

Does anyone who's worked with ChipScope have some tips for speeding it up? Other than not looking at the timing reports, that is...

Thanks,
Bob Perlman