Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
I learnt from the datasheet of a PROM that it's fairly easy to configure an FPGA with a microprocessor. My question is: the bit-stream file is stored on a Windows-formatted disk, and my uP can only read out the bitstream byte by byte. How do I associate these bytes with the serial bit-stream we see in the EPROM's datasheet?

Kelvin.Article: 56801
Hello everybody, I am looking for a good research topic for a study involving FPGAs. I need to know the current research trends with regard to implementation on FPGAs. Something like the implementation of a protocol or an algorithm in wireless/DSP, or basically any useful implementation, would be a good idea to begin with... Thanks in advance, nikhilArticle: 56802
Looks like you're trying to download a bit-file to a PROM (XC18V04). What I know from the toolchain using a PowerPC is that you generate the bit-file with Project Navigator, then import it into Platform Studio again to include the embedded software (assuming you store the software in BRAM blocks) using 'update bitstream'. The resulting bitstream can be converted with iMPACT to an MCS file to be programmed into the PROM.

Regards, Rienk

Rgr wrote:
> Hi NG. I am using ISE 5.2i and EDK 3.2 and am trying to implement a
> MicroBlaze design.
>
> The funny thing is that my design synthesizes fine, and I can even download
> it to my FPGA board (Virtex-II) using Project Navigator. The problem is
> that my written C code does not get implemented as well, so I have to
> download from the EDK software.
> When I do, I get the error below. Anyone have an idea of what is wrong? As
> mentioned, I can download a design to my board using Project Navigator,
> but not with the EDK software, so I do not think it's my cable causing
> trouble :-(
>
> Command bash -c "cd /xygdrive/c/EDKproj1/; make -f system.make download;
> exit;" Started...
> *********************************************
> Downloading Bitstream onto the target board
> *********************************************
> impact -batch etc/download.cmd
> // *** BATCH CMD : setMode -bs
> // *** BATCH CMD : setCable -port lpt0 -baud 9600
> Connecting to cable (Parallel Port - lpt1).
> Checking cable driver.
> Driver windrvr.sys version = 5.0.5.1. LPT base address = 0378h.
> Cable connection established.
> INFO:iMPACT:501 - '1': Added Device UNKNOWN successfully.
> ----------------------------------------------------------------------
> INFO:iMPACT:1366 - Reading etc\xc18v04_vq44.bsd...
> INFO:iMPACT:1366 - Reading C:/Xilinx/xc18v00/data\xc18v04_vq44.bsd...
> INFO:iMPACT:501 - '1': Added Device XC18V04_VQ44 successfully.
> ----------------------------------------------------------------------
> ----------------------------------------------------------------------
> ----------------------------------------------------------------------
> ----------------------------------------------------------------------
> INFO:iMPACT:1366 - Reading etc\xc18v04_vq44.bsd...
> ----------------------------------------------------------------------
> // *** BATCH CMD : addDevice -position 1 -part etc/xc18v04_vq44.bsd
> // *** BATCH CMD : setAttribute -position 1 -attr configFileName -value
> etc/xc18v04_vq44.bsd
> INFO:iMPACT:1366 - Reading etc\xc18v04_vq44.bsd...
> INFO:iMPACT:1366 - Reading C:/Xilinx/xc18v00/data\xc18v04_vq44.bsd...
> INFO:iMPACT:501 - '1': Added Device XC18V04_VQ44 successfully.
> ----------------------------------------------------------------------
> ----------------------------------------------------------------------
> ----------------------------------------------------------------------
> ----------------------------------------------------------------------
> INFO:iMPACT:1366 - Reading etc\xc18v04_vq44.bsd...
> '2': Loading file 'implementation/download.bit' ...
> done.
> INFO:iMPACT:1366 - Reading C:/Xilinx/virtex2/data\xc2v1000.bsd...
> INFO:iMPACT:501 - '2': Added Device xc2v1000 successfully.
> ----------------------------------------------------------------------
> ----------------------------------------------------------------------
> // *** BATCH CMD : addDevice -position 2 -file implementation/download.bit
> // *** BATCH CMD : program -p 2
> Validating chain...
> INFO:iMPACT:1209 - Testing for '0' at position 6. The Instruction capture
> of the device 1 does not match expected capture.
> INFO:iMPACT:1206 - Instruction Capture = '11111111110101'
> INFO:iMPACT:1207 - Expected Capture = '000XXX01XXXX01'
> ERROR:iMPACT:1210 - '1': Boundary-scan chain test failed at bit position '1'.
> A problem may exist in the hardware configuration.
> Check that the cable, scan chain, and power connections are intact,
> that the specified scan chain configuration matches the actual hardware,
> and that the power supply is adequate and delivering the correct voltage.
> ----------------------------------------------------------------------
> ----------------------------------------------------------------------
> Done.
> Command bash -c "cd /xygdrive/c/EDKproj1/; make -f system.make init_bram;
> exit;" Started...
> make: Nothing to be done for `init_bram'.
> Done.

--
Rienk van der Scheer
FPGA engineer
3T BV, http://www.3t.nl/Article: 56803
Does anyone know whether a Spartan-2E DLL can lock to a video clock and multiply it by 2? The video is from a PAL/NTSC camera about 3 meters away, so I'm hoping that the video decoder will output a very stable clock.Article: 56804
"Basuki Endah Priyanto" <EBEPriyanto@ntu.edu.sg> wrote in message news:<F9gz9T8MDHA.2420@exchnews1.main.ntu.edu.sg>...
> I have the following error message in the mapping stage. Does anybody know
> how to resolve this error message?
>
> ERROR:MapLib:32 - lut4 l symbol "iRD/iCMA0" (output signal=iRD/iCMA0/O)
> has an equation that uses an input pin connected to a trimmed signal.
> Make sure that all the pins used in the equation for this LUT have
> signals that are not trimmed (see trim report for details on which
> signals were trimmed).
>
> Thanks.
> Basuki Keren
>
> Ps: I am using ISE 5.1i

I received that message in ISE 4.2 and 5.2. Solution:
Go to Implement Design, right-click and select Properties, select Map Properties, disable TRIM UNCONNECTED SIGNALS (remove the check), close Properties, then re-run Implement Design. The design will pass MAP. Report the problem to Xilinx as a web case.

Bill HannaArticle: 56805
"U. Hernandez" wrote:
>
> Hi guys,
>
> Thanks all for the answers, very appreciated. When I say "image" I am
> talking about "design" (I can't believe this is new terminology for FPGA
> gurus :O)

Whether you believe it or not, it is unconventional terminology to me, and I try to avoid additional confusion by using (or even accepting) strange words...

<snip>

> But I guess that you agree that is a lot for being just "standby".

I agree, but there is nothing you can do about it. It's the price of progress. You can of course buy older, slower, and more expensive parts, and in particular situations that can be a wise move...

> So my next question is, what will happen with Spartan-3? Smaller
> technology, but what about the transistor threshold voltage, has it been
> reduced proportionally?

Spartan-3 benefits from its smaller size (the transistor count is not as high) and from the fact that speed was not the prime objective. There is also a wide spread in idle current, which means that many parts will have less than 20% of the specified value...

I don't like to be the harbinger of bad news, but these are industry-wide facts, and the user community has to face them, even though it hurts. (Have you heard of 80 W in a 3 GHz Pentium, and many watts of idle current?) This is not your father's CMOS anymore...

Peter AlfkeArticle: 56806
The 3S50, 3S200 and 3S400 will be included in the 6.1i WebPACK (coming this Fall).

John wrote:
> Hello,
> Does anyone know when Spartan-3 devices other than the 50K will be
> supported by ISE WebPack? Also, does anybody know what the largest
> Spartan-3 supported by the WebPack tools will be?
>
> ThanksArticle: 56807
Depends on the frequency. The output frequency must be higher than 24 MHz.

Peter Alfke

rob d wrote:
> Does anyone know whether a Spartan-2E DLL can lock to a video clock
> and multiply it by 2? The video is from a PAL/NTSC camera about 3
> meters away, so I'm hoping that the video decoder will output a very
> stable clock.Article: 56808
"Kelvin @ Clementi" <cobraxu@singnet.com.sg> wrote in message news:bckelg$sm9$1@reader01.singnet.com.sg...
> I learnt that it's fairly easy to download an FPGA with a microprocessor,
> from the datasheet of a PROM. My question is, if the bit-stream file is
> stored on a Windows-formatted disk, my uP can only read out the bitstream
> byte by byte. How do I associate these bytes with the serial bit-stream
> we see in the EPROM's datasheet?

Just bit-bang them out of a port. Each bit gets its clock pulse, and you're done. Just make sure you don't mix up LSB/MSB, which happens often. Also don't forget to write some dummy bits (more than 6; better make it 8 or more) after all configuration bits have been shifted out, to start the FPGA. If you also made sure to select CCLK as the startup clock when generating the bitfile, everything will be fine.

-- Regards FalkArticle: 56809
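[Editor's note: a minimal C sketch of Falk's bit-banging suggestion. The `emit_bit` callback is a hypothetical stand-in for the two port writes a real uP would do (set DIN, then pulse CCLK); the bit order and the number of trailing dummy clocks should be checked against your particular FPGA/PROM data sheet.]

```c
#include <stdint.h>
#include <stddef.h>

/* Shift one byte out MSB-first; emit_bit() stands in for the real port
 * accesses: set DIN to the bit value, then give one CCLK pulse. */
static void shift_byte_msb_first(uint8_t byte, void (*emit_bit)(int))
{
    for (int i = 7; i >= 0; i--)
        emit_bit((byte >> i) & 1);   /* bit 7 first -- don't mix up LSB/MSB */
}

/* Send the whole bitstream, then at least 8 extra dummy clocks with DIN
 * high so the FPGA can finish its startup sequence. */
static void send_bitstream(const uint8_t *buf, size_t len,
                           void (*emit_bit)(int))
{
    for (size_t i = 0; i < len; i++)
        shift_byte_msb_first(buf[i], emit_bit);
    for (int i = 0; i < 8; i++)
        emit_bit(1);                 /* dummy bits to start the FPGA */
}
```

On real hardware, `emit_bit` would be replaced by writes to whatever GPIO register drives DIN and CCLK on the board in question.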
"rob d" <rjd@transtech-dsp.com> wrote in message news:e44f5c31.0306160647.530d6b28@posting.google.com...
> Does anyone know whether a Spartan-2E DLL can lock to a video clock
> and multiply it by 2? The video is from a PAL/NTSC camera about 3
> meters away, so I'm hoping that the video decoder will output a very
> stable clock.

??? You mean you have a (wireless?) video receiver which outputs a video clock of 27 (??) MHz. A DLL can lock to a 27 MHz clock signal if the jitter requirements are met, meaning cycle-to-cycle jitter of less than 1 ns (?) and long-term frequency drift that is also low (how low??).

-- Regards FalkArticle: 56810
Peter,

Alas, to do DVI (or OpenLDI) you need to be able to keep the serial bits of the 3-4 pairs to within a fraction of a bit time of skew (there's one common reference clock for all pairs). The MGT can't do that; multiple pairs are only guaranteed to within a character or so.

I haven't implemented it, though I've pencilled it into a couple of designs. The app note you reference in another post (the 840 Mb/s SERDES) is what I'm basing it on (it wasn't lost on me that the appendix in that app note happens to be OpenLDI).

BTW, it looks to me like LVDS can drive a TMDS receiver by simply including some series resistors (220 ohm is what I have in my notes). I have NOT even simulated this, however, much less built it.

Ken Ryan
Smiths AerospaceArticle: 56811
First, which do you want: a mux-DFF, or a DFF with enable (which is what you describe in the code, sort of...)? As you have described it, you have asked the synthesis tool to infer a 2-1 mux and then connect its output to a FF. However, if you had instead coded (I will use Verilog, but it is the same...)

always @(posedge clk)
begin
  if (se)
    y <= sin;
end

what you have coded is a DFF with enable. All flip-flops in the Virtex architectures have enables, and hence this can be implemented without any LUTs. A good synthesis tool SHOULD be able to realize that a 2-1 mux connected to the D of a flop, with one of the two inputs being the Q of the same flop, can be converted to a FF with enable, but it might not always do so...

Avrum

"Valli" <sri_valli_design@hotmail.com> wrote in message news:d9acfecb.0306160339.357f16db@posting.google.com...
> Hi,
>
> Is there any way to implement mux-DFF logic (a D-flop with a 2-1 mux at
> the data line, like a scan flop) "without" using the LUTs (for the mux
> logic) in the CLBs for Virtex-series FPGAs?
>
> Code:
> -----------------------------
> library ieee;
> use ieee.std_logic_1164.all;
>
> entity rtl is
>   port(clk,d,se,sin: in std_logic;
>        y: out std_logic);
> end rtl;
>
> architecture arch of rtl is
> begin
>
>   process(clk,se,sin,d)
>     variable new_d : std_logic;
>   begin
>     if (se='1') then
>       new_d := sin;
>     else
>       new_d := d;
>     end if;
>
>     if (clk'event and clk='1') then
>       y <= new_d;
>     end if;
>   end process;
>
> end arch;
> --------------------------
>
> Thanks,
> Valli.Article: 56812
Kenneth, since the MGTs are so much faster than needed, can we send the 3 or 4 bits serially and use an external ECL shift register (either at the source or at the destination) to convert them back to parallel? Just a thought. I hate to give up. :-)

I will also consult the real MGT gurus here next to me about whether there is a more elegant way.

Peter Alfke

Kenneth Ryan wrote:
>
> Peter,
>
> Alas, to do DVI (or OpenLDI) you need to be able to keep the serial
> bits of the 3-4 pairs to within a fraction of a bit time skew (there's
> one common reference clock for all pairs). The MGT can't do that;
> multiple pairs are only guaranteed to a character or so.
>
> I haven't implemented it, though I've pencilled it into a couple
> designs. The app note you reference in another post (the 840Mb/s
> serdes) is what I'm basing it on (it wasn't lost on me that the
> appendix in that app note happens to be OpenLDI).
>
> BTW, it looks to me like LVDS can drive a TMDS receiver by simply
> including some series resistors (220 ohm is what I have in my notes).
> I have NOT even simulated this, however, much less built it.
>
> Ken Ryan
> Smiths AerospaceArticle: 56814
Hello,

We are a group of students at Ryerson University working with the Xilinx RPP with the ARM7TDMI processor and the XCV2000E FPGA. What we are trying to do now is read some data from the SDRAM located on the core module into the FPGA located on the other module. We are not sure how we can accomplish this. We were just wondering if anyone has tried something like this before. If so, how did you accomplish it?

ThanksArticle: 56815
Ken, I think there is a misunderstanding here: when we say the bits from different MGTs (with common fref and common parallel timing) are not perfectly aligned (and you should use channel bonding on the receive side), we are thinking of 3 Gbps, i.e. bit times of 330 picoseconds. At those bit rates, even if the bits leave the chip exactly "together", they will get dispersed just by going through different balls of the package, different pc-board traces, etc. These are picosecond issues, but they destroy synchronicity at 3 Gbps.

If you are running much slower bit rates, you can ignore these little errors. (Or to put it more bluntly: you will have these problems with any silicon solution you use.) If you want to be perfectly aligned, use channel bonding at the receiver, but that may be overkill in your case.

BTW: Why do you need multiple channels? Is this RGB in the digital domain?

Greetings
Peter Alfke
================

Kenneth Ryan wrote:
>
> Peter,
>
> Alas, to do DVI (or OpenLDI) you need to be able to keep the serial
> bits of the 3-4 pairs to within a fraction of a bit time skew (there's
> one common reference clock for all pairs). The MGT can't do that;
> multiple pairs are only guaranteed to a character or so.
>
> I haven't implemented it, though I've pencilled it into a couple
> designs. The app note you reference in another post (the 840Mb/s
> serdes) is what I'm basing it on (it wasn't lost on me that the
> appendix in that app note happens to be OpenLDI).
>
> BTW, it looks to me like LVDS can drive a TMDS receiver by simply
> including some series resistors (220 ohm is what I have in my notes).
> I have NOT even simulated this, however, much less built it.
>
> Ken Ryan
> Smiths AerospaceArticle: 56816
"rob d" <rjd@transtech-dsp.com> wrote in message news:e44f5c31.0306160647.530d6b28@posting.google.com...
> Does anyone know whether a Spartan-2E DLL can lock to a video clock
> and multiply it by 2? The video is from a PAL/NTSC camera about 3
> meters away, so I'm hoping that the video decoder will output a very
> stable clock.

I'm doing exactly that in my latest design: using the 27 MHz clock output of a video decoder (Micronas VPX3226) and multiplying it internally by two to get a 54 MHz clock for the SDRAM. It works very well so far.Article: 56817
In the RocketIO User's Guide, table 2-6 lists the TX FIFO latency as +/- 0.5 TXUSRCLOCK cycles. The attribute table says TX_BUFFER_USE must always be set to TRUE. Therefore it sounds to me like there's more than just the package/trace variance. Or is there another way to bypass it?

> BTW: Why do you need multiple channels? Is this RGB in the digital domain?

For DVI? Yes, there's a pixel clock and three colors (sync et al. are encoded on the color lines).

kenArticle: 56818
I have been thinking about doing something similar. I see three solutions:

1) Don't use MS-DOS; use a raw disk/file editor and place the data sector by sector. This allows you to specify the exact location you want to use, but if you want easy access from a PC and want to have other files, this may not be an option.

2) Have software on the uP that can read the directory, find the file, and of course read the file. This is the most flexible but requires the most software. Look for FreeDOS via Google, which is an open-source clone of MS-DOS with file system code available; port that code to your uP and you're done.

3) If there is some easily identifiable data at the beginning of the file, you could search the disk for that data, or append data to the front of the file to act as a flag of the file start.

"Kelvin @ Clementi" <cobraxu@singnet.com.sg> wrote in message news:<bckelg$sm9$1@reader01.singnet.com.sg>...
> I learnt that it's fairly easy to download an FPGA with a microprocessor,
> from the datasheet of a PROM. My question is, if the bit-stream file is
> stored on a Windows-formatted disk, my uP can only read out the bitstream
> byte by byte. How do I associate these bytes with the serial bit-stream
> we see in the EPROM's datasheet?
>
> Kelvin.Article: 56819
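[Editor's note: the directory-reading option (2) can be sketched in C without porting a full DOS. This is a minimal sketch assuming the classic FAT12/16 root-directory layout: 32-byte entries, an 11-byte space-padded 8.3 name, first cluster at offset 26, and file size at offset 28 (both little-endian). `find_file` and the example name are hypothetical; a real implementation must first parse the boot sector to locate the root directory and FAT.]

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Search a raw FAT12/16 root-directory buffer for an 8.3 name such as
 * "FPGA    BIT" (8 name chars + 3 extension chars, space-padded, no dot).
 * Returns the first cluster number, or 0 if not found; *size_out receives
 * the file length in bytes. */
static uint16_t find_file(const uint8_t *dir, size_t nbytes,
                          const char name83[11], uint32_t *size_out)
{
    for (size_t off = 0; off + 32 <= nbytes; off += 32) {
        const uint8_t *e = dir + off;
        if (e[0] == 0x00) break;        /* end-of-directory marker */
        if (e[0] == 0xE5) continue;     /* deleted entry */
        if (memcmp(e, name83, 11) != 0) continue;
        *size_out = (uint32_t)e[28] | ((uint32_t)e[29] << 8)
                  | ((uint32_t)e[30] << 16) | ((uint32_t)e[31] << 24);
        return (uint16_t)(e[26] | (e[27] << 8));   /* first cluster */
    }
    return 0;
}
```

From the first cluster, the uP would then follow the FAT chain cluster by cluster, feeding each byte of the file to the configuration bit-banger.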
Peter Alfke wrote:
>
> I don't like to be the harbinger of bad news, but these are
> industry-wide facts, and the user community has to face them, even
> though it hurts.
> (Have you heard of 80 W in a 3 GHz Pentium, and many watts of idle current?)
> This is not your father's CMOS anymore...

Can you give us any idea of what to expect for idle current on the Spartan-3 chips? I am looking at using one in a not-so-high-current application (at a low clock speed). I can't make any sort of analysis since there is no power consumption data available (at least in the April data sheet). I am just looking for an order of magnitude, nothing I will use as a fixed number. Are we talking 10's of mA or 100's of mA?

--
Rick "rickman" Collins
rick.collins@XYarius.com
Ignore the reply address. To email me use the above address with the XY removed.
Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design
URL http://www.arius.com
4 King Ave, Frederick, MD 21701-3110
301-682-7772 Voice, 301-682-7666 FAXArticle: 56820
I am looking at using standard registered transceiver chips to provide a 5 volt tolerant interface to the PC/104 bus. When I have looked at this in the past, the total chip size was always too large. But I have seen a few new parts available in a 24 pin QFN package from Philips that might be small enough. Anyone know of other makers of similar registered transceiver parts in 24 pin QFN packages? TI, IDT and Fairchild only seem to have the simpler 20 pin QFN parts. -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 56821
Does anyone here have any guesstimates or general rules of thumb used to figure the cost of X-ray inspection for a BGA part on a PCB?

Here's the scenario: we are designing a board and can choose between inexpensive Cyclone FPGAs in the 240-pin TQFP, or we could move to a Stratix in the 672-ball BGA. The Cyclone in question costs under $30; the Stratix costs ~$230.

Without going into excessive detail: if I use Cyclone, I need more expensive external CDR + SERDES. Stratix would allow me to use cheaper external CDR + SERDES due to its higher I/O speeds.

In order to decide which solution is ultimately most cost-effective, I need a feel for what the added cost of X-ray inspection of the BGA part would be. Assume one component per board, a board size of about 7" x 9", and 6 layers.

I'm assuming that the cost is somewhat fixed regardless of board quantity, as you have to do the same amount of work on each board, quantity 1 or 1000.

Thanks,

PMacArticle: 56822
On Mon, 16 Jun 2003 15:16:56 -0700, Patrick MacGregor wrote:

> Does anyone here have any guesstimates or general rules of thumb used to
> figure the cost of X-ray inspection for a BGA part on a PCB?
>
> Here's the scenario: We are designing a board and can choose between
> inexpensive Cyclone FPGAs in the 240-pin TQFP, or we could move to a
> Stratix in the 672 BGA. The Cyclone in question costs under $30. The
> Stratix costs ~$230.
>
> Without going into excessive detail, if I use Cyclone, I need more
> expensive external CDR + SERDES. Stratix would allow me to use cheaper
> external CDR + SERDES due to its higher I/O speeds.
>
> In order to decide which solution is ultimately most cost-effective, I
> need to have some feel for what the added cost of X-ray inspection of the
> BGA part would be. Assume one component per board, board size = about
> 7" x 9" and 6 layers.
>
> I'm assuming that the cost is somewhat fixed, regardless of board
> quantity, as you have to do the same amount of work on each board --
> quantity 1 or 1000.
>
> Thanks,
>
> PMac

I don't think it's much, as it just seems to be folded into the SM assembly cost, at least for the cards we do; certainly no more than a few dollars. If you had to do it separately it would be more...

PCWArticle: 56823
rickman wrote:
>
> Peter Alfke wrote:
> >
> > I don't like to be the harbinger of bad news, but these are
> > industry-wide facts, and the user community has to face them, even
> > though it hurts.
> > (Have you heard of 80 W in a 3 GHz Pentium, and many watts of idle current?)
> > This is not your father's CMOS anymore...
>
> Can you give us any idea of what to expect for idle current on the
> Spartan-3 chips? I am looking at using one in a not-so-high-current
> application (at a low clock speed). I can't make any sort of analysis
> since there is no power consumption data available (at least in the
> April data sheet). I am just looking for an order of magnitude,
> nothing I will use as a fixed number. Are we talking 10's of mA or
> 100's of mA?

A few are waiting on replies to this, and the silence/delays suggest the news is 'not good'? The Pentium has been used more than once as a reference point, which is also not a good sign...

Present devices are 10's of mA, so we could start an informal 'canteen sweepstake' on whether this will go over 100 mA :) My 5c goes on 125 mA... (or should I say 150 mW?)

[Place your educated guess here]

-jgArticle: 56824
"Ed Stevens" <ed@stevens8436.fslife.co.uk> wrote in message
> Anyone got any ideas on what I'm doing wrong? In the Xilinx Project
> Navigator I've generated the programming file without any errors.

Ed,

In the "Process" window in Project Navigator, right-click on "Generate Programming File", which will open the "Process Properties" window. Under "Readback Options", check the "Create Mask File" box. Then generate the programming file again. You might have to check the "Enable Readback" and "Allow SelectMAP Pins to Persist" boxes as well. Hope this helps.

Regards,
Pradeep