Hi, I am a pre-si verification engineer looking to learn HW design, synthesis and P&R through FPGAs. (I am already very comfortable with Verilog, so that is not the issue; I really want to learn to "design+synthesize+P&R" as opposed to "code in Verilog".)

Recently checked out a Spartan dev kit, which looks very affordable. Questions:

Is this the best way to go (for a hobbyist)? I am willing to spend a couple hundred dollars on dev kits etc. I've already downloaded the ISE WebPack and have full access to ModelSim at work.

What books should I get for DESIGN & SYNTHESIS (again, not Verilog coding but "designing" synthesizable code with Verilog)? Please suggest books with and without an FPGA "tilt" or focus if possible.

Once you do everything and download stuff into the FPGA, how do you run the application? For example, if you design an H.264 encoder (just for example), and the FPGA already has the code downloaded after the synthesis and P&R stage, how can I pass an example stimulus file (i.e. an H.264 stream) to see if it works correctly?

My grand vision is to build an H.264 encoder module in an FPGA, while teaching myself good design AND H.264 encoding in the process.

Am I missing anything here? Anything else I should know?

Thanks

Article: 104076
Just to add, I am already comfortable with poking around RTL and large designs; I just haven't done the design myself. (Victim of working in a big company :-) )

Are there any more project ideas that will add value to my resume (such as an MPEG encoder/decoder etc.) that I can do with an FPGA and check out at home using a PC?

I have already started designing small modules (FIFOs etc., mostly synchronous). Just getting into async FIFO design; I still haven't understood it fully.

-Anand

anand wrote:
> Hi,
>
> I am a pre-si verification engineer looking to learn HW design,
> synthesis and P&R through FPGAs. (I am already very comfortable with
> Verilog, so that is not the issue; I really want to learn to
> "design+synthesize+P&R" as opposed to "code in Verilog".)
>
> Recently checked out a Spartan dev kit, which looks very affordable.
> Questions:
>
> Is this the best way to go (for a hobbyist)? I am willing to spend a
> couple hundred dollars on dev kits etc. I've already downloaded the
> ISE WebPack and have full access to ModelSim at work.
>
> What books should I get for DESIGN & SYNTHESIS (again, not Verilog
> coding but "designing" synthesizable code with Verilog)? Please
> suggest books with and without an FPGA "tilt" or focus if possible.
>
> Once you do everything and download stuff into the FPGA, how do you
> run the application?
> For example, if you design an H.264 encoder (just for example), and
> the FPGA already has the code downloaded after the synthesis and P&R
> stage, how can I pass an example stimulus file (i.e. an H.264 stream)
> to see if it works correctly?
>
> My grand vision is to build an H.264 encoder module in an FPGA, while
> teaching myself good design AND H.264 encoding in the process.
>
> Am I missing anything here? Anything else I should know?
>
> Thanks

Article: 104077
Hi,

Thanks for all the replies.

> But each pixel has to shoot ten bits through the wires, as you said
> above. That is 1.5Gbps or 750MHz worst case frequency. Both your op
> amps and your FPGAs will have trouble coping with that.

I don't understand how it's 1.5Gbps. If we have a clock freq. of 150 MHz, we are serially sending one bit every 1 / 150MHz = ~6.7ns. So this means that there is a delay of (6.7 * 10 = 67ns) per pixel (because R, G, B are sent in parallel). Shouldn't this be 150MBps, and not 1.5Gbps?

It seems the best course is to go with an LVDS receiver, like the ones recommended above, but my only worry is whether they can be fast enough.

Thanks

Article: 104078
1.65 Gbps is slightly beyond the capabilities of any general-purpose I/O on all the FPGAs that I am aware of. Correct me if I am wrong. But it is very easy for the dedicated multi-gigabit transceivers available from several vendors (e.g. from Xilinx: Virtex-II Pro, Virtex-4 FX and soon Virtex-5 LXT).

Peter Alfke
=============
Thomas Womack wrote:
>
> One pin-pair per colour channel, sending 8->10-encoded data at up to
> 1650Mbps. There's one shielding wire per pair of colour channels, and
> a differential clock also. I've no idea what the signalling protocols
> look like.
>
> Some kinds of HDTV set (as opposed to computer monitor) encrypt the
> data with a stream cipher, though that only has to run at the pixel
> rate rather than the bit rate.
>
> For the very high resolutions (2048x1536 and above), DVI goes to two
> pin-pairs per channel and so up to about 4Gbps.
>
> Tom

Article: 104079
anand wrote:
> Just getting into async FIFO design; I still haven't understood it
> fully.

Using a dual-ported RAM, most of any asynchronous FIFO design is trivial. But the devil is in the flags (FULL and EMPTY), which get de-activated by the "wrong" clock. For example: EMPTY is activated by the read clock, and is of importance only in that read clock domain, but it gets de-activated by a write clock. There is no defined phase relationship between the two clocks. :-(

That clock-domain crossing requires Gray-coded address counters and some trickery to avoid (or mitigate) metastability.

Just google, and read some papers by Sunburst or Xilinx/yours truly, like XAPP051.

Peter Alfke, Xilinx

Article: 104080
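The property that makes Gray-coded address counters safe to sample from the other clock domain is that successive codes differ in exactly one bit, so a pointer caught mid-transition is at worst one count stale, never wildly wrong. That property is easy to sanity-check in a few lines of Python (an illustrative sketch, not HDL and not from the original thread):

```python
def bin_to_gray(b: int) -> int:
    """Standard binary-to-Gray conversion: XOR each bit with its neighbor."""
    return b ^ (b >> 1)

def gray_to_bin(g: int) -> int:
    """Inverse conversion: running XOR of all higher-order bits."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Successive Gray codes differ in exactly one bit, so a flag comparator
# sampling an address counter from the other clock domain can only ever
# see the old value or the new value -- never a garbage intermediate.
for i in range(255):
    diff = bin_to_gray(i) ^ bin_to_gray(i + 1)
    assert bin(diff).count("1") == 1

# Round-trip check of the two conversions.
assert all(gray_to_bin(bin_to_gray(i)) == i for i in range(256))
print("single-bit-change property holds")
```

This is exactly why a binary pointer cannot be passed across the clock boundary directly: several of its bits may change on the same edge, and the sampling domain could capture any mixture of old and new bits.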
Thanks Peter, I just downloaded XAPP051.

Can you (or someone else) provide some inputs on my post #1 above? Thanks in advance.

-Anand

Peter Alfke wrote:
> anand wrote:
> > Just getting into async FIFO design; I still haven't understood it
> > fully.
>
> Using a dual-ported RAM, most of any asynchronous FIFO design is
> trivial.
> But the devil is in the flags (FULL and EMPTY), which get de-activated
> by the "wrong" clock.
> For example: EMPTY is activated by the read clock, and is of
> importance only in that read clock domain, but it gets de-activated by
> a write clock. There is no defined phase relationship between the two
> clocks. :-(
> That clock-domain crossing requires Gray-coded address counters and
> some trickery to avoid (or mitigate) metastability.
>
> Just google, and read some papers by Sunburst or Xilinx/yours truly,
> like XAPP051.
>
> Peter Alfke, Xilinx

Article: 104081
Hi All,

To the OP: Silicon Image manufactures ICs that receive a serial DVI stream and convert it to various other formats (656-like serial stream, 12b DDR pixel data + HSYNC + VSYNC + DE, etc.). That is probably what you need.

Anyhow, DVI is transmitted over three serial links which are 8b10b encoded, so 150MHz works out to (150MHz*3links) / 10bits = 45 MB/sec. This is not enough for HD speeds, but is enough for some VESA defined resolutions.

Regards,
Ljubisa Bajic

Peter Alfke wrote:
> 1.65 Gbps is slightly beyond the capabilities of any general-purpose
> I/O on all the FPGAs that I am aware of. Correct me if I am wrong.
> But it is very easy for the dedicated multi-gigabit transceivers
> available from several vendors (e.g. from Xilinx: Virtex-II Pro,
> Virtex-4 FX and soon Virtex-5 LXT).
>
> Peter Alfke
> =============
> Thomas Womack wrote:
> >
> > One pin-pair per colour channel, sending 8->10-encoded data at up to
> > 1650Mbps. There's one shielding wire per pair of colour channels,
> > and a differential clock also. I've no idea what the signalling
> > protocols look like.
> >
> > Some kinds of HDTV set (as opposed to computer monitor) encrypt the
> > data with a stream cipher, though that only has to run at the pixel
> > rate rather than the bit rate.
> >
> > For the very high resolutions (2048x1536 and above), DVI goes to two
> > pin-pairs per channel and so up to about 4Gbps.
> >
> > Tom

Article: 104082
The FPGA allows you to define the internal logic of a chip. After you complete the logic design and simulation, you pass it through the (Xilinx) place and route tool, which generates a bitstream of a few million bits. You feed this bitstream (in bit-serial or byte-parallel form) into the FPGA (various methods, all well-described), and then you have your custom chip, as long as you keep its Vcc up!

To make it do something, you feed its inputs and observe its outputs, just as you would with a specialized custom chip, ASIC, memory, microprocessor etc...

Was that too simplistic?

Peter Alfke
==================
anand wrote:
> Thanks Peter, I just downloaded XAPP051.
>
> Can you (or someone else) provide some inputs on my post #1 above?
> Thanks in advance.
>
> -Anand
> Peter Alfke wrote:
> > anand wrote:
> > > Just getting into async FIFO design; I still haven't understood it
> > > fully.
> >
> > Using a dual-ported RAM, most of any asynchronous FIFO design is
> > trivial.
> > But the devil is in the flags (FULL and EMPTY), which get
> > de-activated by the "wrong" clock.
> > For example: EMPTY is activated by the read clock, and is of
> > importance only in that read clock domain, but it gets de-activated
> > by a write clock. There is no defined phase relationship between the
> > two clocks. :-(
> > That clock-domain crossing requires Gray-coded address counters and
> > some trickery to avoid (or mitigate) metastability.
> >
> > Just google, and read some papers by Sunburst or Xilinx/yours truly,
> > like XAPP051.
> >
> > Peter Alfke, Xilinx

Article: 104083
This is getting ever more confusing.

I think we agreed already that there are roughly 2 million pixels x 60 Hz, roughly 150 million pixels per second.

There are three channels, one per color, and each pixel color is represented by 8 serial bits. Encoded with 8B/10B this requires 10 bits. 150 million pixels times 10 bits = 1.5 gigabits per second. And this obviously three times, once per color. Inside the FPGA, the pixel color is represented as 8 bits in parallel, which gets us back to a manageable 150 MHz and a total of 24 channels.

The serial bit rate is too high (by a factor of 2) for general-purpose I/O; therefore, as I claimed, only dedicated inputs can handle such a high bit rate of 1.5 Gbps. But if you are willing to dedicate a total of 24 input pins (plus clocks), then the frequency is only 150 MHz, which any modern chip handles easily.

The total traffic is roughly 4 gigabits per second. There is no way around that, unless you use compression.

Can we all agree on this?

Peter Alfke, Xilinx Applications
================

Article: 104084
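Peter's arithmetic is easy to verify. A few lines of Python (an illustrative check, not part of the original thread) reproduce the per-channel and total figures from the numbers the thread has already agreed on:

```python
# Figures agreed in the thread
pixel_rate = 150e6        # ~150 million pixels per second (incl. blanking)
bits_per_symbol = 10      # 8 payload bits encoded into 10 serial bits
channels = 3              # one differential pair per color

per_channel_bps = pixel_rate * bits_per_symbol
total_bps = per_channel_bps * channels

assert per_channel_bps == 1.5e9   # 1.5 Gbps per color channel
assert total_bps == 4.5e9         # "roughly 4 gigabits per second" in the post
print(f"per channel: {per_channel_bps / 1e9} Gbps, total: {total_bps / 1e9} Gbps")
```

The earlier poster's 150MBps figure comes from counting one bit per pixel clock; the point is that ten bits must be shifted out within each 6.7ns pixel period, so the serial line runs at ten times the pixel clock.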
anand wrote:
> Hi,
>
> I am a pre-si verification engineer looking to learn HW design,
> synthesis and P&R through FPGAs. (I am already very comfortable with
> Verilog, so that is not the issue; I really want to learn to
> "design+synthesize+P&R" as opposed to "code in Verilog".)
>
> Recently checked out a Spartan dev kit, which looks very affordable.
> Questions:
>
> Is this the best way to go (for a hobbyist)? I am willing to spend a
> couple hundred dollars on dev kits etc. I've already downloaded the
> ISE WebPack and have full access to ModelSim at work.

Probably as good as any.

> What books should I get for DESIGN & SYNTHESIS (again, not Verilog
> coding but "designing" synthesizable code with Verilog)? Please
> suggest books with and without an FPGA "tilt" or focus if possible.

I use Thomas & Moorby's "The Verilog Hardware Description Language", which is mostly about Verilog but sprinkles in guidance for synthesizable designs.

> Once you do everything and download stuff into the FPGA, how do you
> run the application?

You don't. What's in an FPGA isn't an "application" and it doesn't "run" in the sense that an application does. Once you've _configured_ the FPGA you let it loose, and off it goes doing whatever you coded it to do.

> For example, if you design an H.264 encoder (just for example), and
> the FPGA already has the code downloaded after the synthesis and P&R
> stage, how can I pass an example stimulus file (i.e. an H.264 stream)
> to see if it works correctly?

You wouldn't really run a stimulus file through it -- you'd need to use (or generate) an appropriate stream to run in real life to wiggle the input pins on your FPGA.

> My grand vision is to build an H.264 encoder module in an FPGA, while
> teaching myself good design AND H.264 encoding in the process.
>
> Am I missing anything here? Anything else I should know?
>
> Thanks

You might want to start out a bit simpler -- I'd suggest blinking the lights, then maybe blinking the lights under the control of the switches.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Posting from Google? See http://cfaj.freeshell.org/google/

"Applied Control Theory for Embedded Systems" came out in April.
See details at http://www.wescottdesign.com/actfes/actfes.html

Article: 104085
fslearner wrote:
> Can anybody comment on the feasibility of this project and perhaps
> point me to some existing VHDL code? Otherwise, does anyone have any
> other feasible ideas?

As others have concurred, definitely feasible. I have implemented a read-only version of a WD1793 FDC in VHDL which interfaces to a SPI serial flash device rather than a floppy drive.

Some in this thread have talked about drive pulses and MFM encoding etc. - that's *way* too low-level for your requirements. There is no need to emulate operation at the physical encoding level - in fact you'd be silly to do so! You need only emulate the register-level interface of the controller and implement a means to store the data for a disk image. Depending on the actual controller, you *may* need to store a little more info than the actual sector data - for example, I needed to flag which sectors had a non-standard data address mark. However, considering that you don't need to read the 'images' back on the same controller, you can probably get away with fudging it.

The size of your emulated floppy image is probably still going to be constrained by the size of a floppy disk - your imaging device still needs to 'see' a formatted file system and is only expecting a floppy-sized disk. However, since you can 'suck' the image off your emulated drive immediately, I'd imagine you could have an arrangement such that as soon as an image is written, it is transferred to your PC and then deleted from the emulated disk?!?

I started my implementation with a micro (soft IP core) rather than VHDL code. I had an interrupt firing off a write to the emulated disk controller 'command' register. It was sort-of working but wasn't quite quick enough for what I needed, so I opted for a VHDL design. However, I'm sure there's plenty of small micros around fast enough to do the job.

Regards,

--
Mark McDougall, Engineer
Virtual Logic Pty Ltd, <http://www.vl.com.au>
21-25 King St, Rockdale, 2216
Ph: +612-9599-3255 Fax: +612-9599-3266

Article: 104086
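The register-level emulation Mark describes ultimately boils down to mapping a (track, sector) request onto a flat disk image held in some backing store. A minimal Python model of that lookup (the geometry constants and function names are assumptions for illustration; the real WD1793 state machine has many more commands and status bits):

```python
SECTOR_SIZE = 256          # bytes per sector; assumed geometry
SECTORS_PER_TRACK = 10
NUM_TRACKS = 40

def sector_offset(track: int, sector: int) -> int:
    """Byte offset of a sector in a flat single-sided disk image.
    Sectors are conventionally numbered from 1 on the WD1793."""
    assert 0 <= track < NUM_TRACKS
    assert 1 <= sector <= SECTORS_PER_TRACK
    return (track * SECTORS_PER_TRACK + (sector - 1)) * SECTOR_SIZE

def read_sector(image: bytes, track: int, sector: int) -> bytes:
    """What a READ SECTOR command ultimately returns, minus all the
    status-register and DRQ handshaking of the real controller."""
    off = sector_offset(track, sector)
    return image[off:off + SECTOR_SIZE]

# A blank 100 KB image: 40 tracks * 10 sectors * 256 bytes.
image = bytes(NUM_TRACKS * SECTORS_PER_TRACK * SECTOR_SIZE)
assert sector_offset(0, 1) == 0
assert sector_offset(1, 1) == 2560
assert len(read_sector(image, 39, 10)) == SECTOR_SIZE
```

The point of the sketch is Mark's observation: once the controller's registers are emulated, the "disk" is just address arithmetic over a memory, which is why no MFM-level emulation is needed.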
We are a manufacturer of low cost, high quality PCB prototypes in China. Silk masks are produced for the same price. Send us a PCB file and we will reply promptly with a quote. We can assemble components for your company. If any question, please email us with your questions. We sell high quality HDI PCBs. Delivery times are prompt (quick turn). We can supply color PCB boards (example: blue, black, red, green, white, yellow).

Mail: njpcb@vip.163.com or sales@njpcb.com

E-mail PCB files to us and we will provide a prompt quote. We send PCBs via DHL or UPS (approx 1 day from China to the USA).

Product Name: Printed Circuit Board
Place of Origin: China
Features:
1) Surface finish: hot air solder leveling, entice, immersion gold, carbon ink, gold finger, peelable mask
2) PCB file format: protel, powerpcb, Gerber file, Engle...
3) Profile: routing, punching, V-Cut, chamfered
4) Number of layers for mass production: 2 - 12
5) Finished copper weight from .25 to 3oz
6) Minimum finished hole size .006"
7) Min. trace .004" - Min. space .004"
8) Max. panel size: 18" x 24"
9) Printed circuit board final thickness from .016" to .125"
10) Soldermask/legend in a variety of colors (green, white, black, yellow, red...)
11) Operating costs (10 pieces): 2 layers (about 0.048$/inch), 4 layers (about 0.096$/inch)
Lead times:
2 layers (1 days, 3 days, 5 days) Ships next day.
4 layers (2 days, 5 days, 7 days)
6 layers (3 days, 5 days, 8 days)
Min Qty 3pcs.

Web: www.njpcb.com  Mail: sales@njpcb.com
Contact Person: Mr. Ding JiaBao
Website: http://www.njpcb.com
E-mail: njpcb@vip.163.com or sales@njpcb.com

Any advice will be greatly appreciated! If any question, please send e-mail to us; we will reply soon.

Article: 104087
> You wouldn't really run a stimulus file through it -- you'd need to
> use (or generate) an appropriate stream to run in real life to wiggle
> the input pins on your FPGA.

Thanks Tim. So, if I had to test the application in real life, then I have to stimulate the input pins, correct? So, how do I go about testing on the FPGA as to whether the design actually works in real life (not in simulation)? Assume I have an H.264 stream that I can use.

The dev board kit has flash memory, so I guess I can probably load the flash memory with the stimulus stream. But the question is, how do I make it wiggle the appropriate pins of the FPGA itself, and how do I collect the output to verify it?

Tim Wescott wrote:
> anand wrote:
>
> > Hi,
> >
> > I am a pre-si verification engineer looking to learn HW design,
> > synthesis and P&R through FPGAs. (I am already very comfortable with
> > Verilog, so that is not the issue; I really want to learn to
> > "design+synthesize+P&R" as opposed to "code in Verilog".)
> >
> > Recently checked out a Spartan dev kit, which looks very affordable.
> > Questions:
> >
> > Is this the best way to go (for a hobbyist)? I am willing to spend a
> > couple hundred dollars on dev kits etc. I've already downloaded the
> > ISE WebPack and have full access to ModelSim at work.
>
> Probably as good as any.
>
> > What books should I get for DESIGN & SYNTHESIS (again, not Verilog
> > coding but "designing" synthesizable code with Verilog)? Please
> > suggest books with and without an FPGA "tilt" or focus if possible.
>
> I use Thomas & Moorby's "The Verilog Hardware Description Language",
> which is mostly about Verilog but sprinkles in guidance for
> synthesizable designs.
>
> > Once you do everything and download stuff into the FPGA, how do you
> > run the application?
>
> You don't. What's in an FPGA isn't an "application" and it doesn't
> "run" in the sense that an application does. Once you've _configured_
> the FPGA you let it loose, and off it goes doing whatever you coded it
> to do.
>
> > For example, if you design an H.264 encoder (just for example), and
> > the FPGA already has the code downloaded after the synthesis and P&R
> > stage, how can I pass an example stimulus file (i.e. an H.264
> > stream) to see if it works correctly?
>
> You wouldn't really run a stimulus file through it -- you'd need to
> use (or generate) an appropriate stream to run in real life to wiggle
> the input pins on your FPGA.
>
> > My grand vision is to build an H.264 encoder module in an FPGA,
> > while teaching myself good design AND H.264 encoding in the process.
> >
> > Am I missing anything here? Anything else I should know?
> >
> > Thanks
>
> You might want to start out a bit simpler -- I'd suggest blinking the
> lights, then maybe blinking the lights under the control of the
> switches.
>
> --
>
> Tim Wescott
> Wescott Design Services
> http://www.wescottdesign.com
>
> Posting from Google? See http://cfaj.freeshell.org/google/
>
> "Applied Control Theory for Embedded Systems" came out in April.
> See details at http://www.wescottdesign.com/actfes/actfes.html

Article: 104088
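One common hobbyist answer to the question above is to stream test vectors from a PC over the board's RS-232 link and log the design's output the same way. A sketch of the PC side in Python (the sync byte, length field, and checksum here are an invented framing for illustration, not any real standard; actually transmitting would use the pyserial package):

```python
import struct

SYNC = 0xA5  # hypothetical start-of-frame marker, invented for this sketch

def make_frame(payload: bytes) -> bytes:
    """Wrap a chunk of stimulus in a trivial [sync, length, payload,
    checksum] frame so an FPGA-side UART receiver can find chunk
    boundaries. The byte layout is made up for illustration."""
    if len(payload) > 255:
        raise ValueError("length field is one byte; use smaller chunks")
    checksum = sum(payload) & 0xFF
    return struct.pack("BB", SYNC, len(payload)) + payload + bytes([checksum])

def frames_for_stream(stream: bytes, chunk: int = 64):
    """Split a stimulus file (e.g. a raw video stream) into frames."""
    for i in range(0, len(stream), chunk):
        yield make_frame(stream[i:i + chunk])

# Sending would then look something like this (needs pyserial and a board):
#   import serial
#   with serial.Serial("COM1", 115200) as port:
#       for f in frames_for_stream(open("stimulus.bin", "rb").read()):
#           port.write(f)

demo = make_frame(b"\x01\x02\x03")
assert demo == bytes([0xA5, 3, 1, 2, 3, 6])
```

The FPGA side would be a small UART receiver plus a state machine (or soft-core firmware) that strips this framing, feeds the payload to the design under test, and frames the results back out; for bulk data, preloading the on-board flash as suggested above avoids the UART bandwidth limit.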
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

t2531998@126.com wrote:
> [spam spewage]

Is there a way to filter this spam out? It appears that this guy/gal/other is morphing as he/she/it continues to spam, and I'm not seeing an obvious filter to use.

- --
Steve Williams             "The woods are lovely, dark and deep.
steve at icarus.com         But I have promises to keep,
http://www.icarus.com       and lines to code before I sleep,
http://www.picturel.com     And lines to code before I sleep."

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.5 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFElhQmrPt1Sc2b3ikRAp89AJwIhggxDz1G8S8DatzIotR64z6dgACgymQw
Q7U5QiHMsSYZ4jTxLurDzW0=
=ny9J
-----END PGP SIGNATURE-----

Article: 104089
Peter Alfke wrote:
> This is getting ever more confusing.
>
> I think we agreed already that there are roughly 2 million pixels x 60
> Hz, roughly 150 million pixels per second.
>
> There are three channels, one per color, and each pixel color is
> represented by 8 serial bits. Encoded with 8B/10B this requires 10
> bits. 150 million pixels times 10 bits = 1.5 gigabits per second. And
> this obviously three times, once per color.
> Inside the FPGA, the pixel color is represented as 8 bits in parallel,
> which gets us back to a manageable 150 MHz and a total of 24 channels.
> The serial bit rate is too high (by a factor of 2) for general-purpose
> I/O; therefore, as I claimed, only dedicated inputs can handle such a
> high bit rate of 1.5 Gbps. But if you are willing to dedicate a total
> of 24 input pins (plus clocks), then the frequency is only 150 MHz,
> which any modern chip handles easily.
>
> The total traffic is roughly 4 gigabits per second. There is no way
> around that, unless you use compression.
>
> Can we all agree on this?
> Peter Alfke, Xilinx Applications
> ================

What might not be communicated so far: while DVI can go up to a 165 MHz pixel clock limit (without going to a dual-link DVI), the pixel clock doesn't have to go that fast. If one is using a slow, small LCD, one can get by with rates that don't exceed 300 Mb/s/channel. I'm not aware (though I haven't done deep research) that there's a lower limit on the pixel clock.

Article: 104090
Doesn't the "Sender" always contain "@126.com"?

Stephen Williams wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> t2531998@126.com wrote:
>> [spam spewage]
>
> Is there a way to filter this spam out? It appears that this
> guy/gal/other is morphing as he/she/it continues to spam, and
> I'm not seeing an obvious filter to use.
>
> - --
> Steve Williams             "The woods are lovely, dark and deep.
> steve at icarus.com         But I have promises to keep,
> http://www.icarus.com       and lines to code before I sleep,
> http://www.picturel.com     And lines to code before I sleep."
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1.2.5 (GNU/Linux)
> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org
>
> iD8DBQFElhQmrPt1Sc2b3ikRAp89AJwIhggxDz1G8S8DatzIotR64z6dgACgymQw
> Q7U5QiHMsSYZ4jTxLurDzW0=
> =ny9J
> -----END PGP SIGNATURE-----

Article: 104091
On Sun, 18 Jun 2006 20:04:06 -0700, Stephen Williams <spamtrap@icarus.com> wrote:
>
> Is there a way to filter this spam out? It appears that this
> guy/gal/other is morphing as he/she/it continues to spam, and
> I'm not seeing an obvious filter to use.

I think it is time for everybody who uses this newsgroup to send a short email to njpcb@vip.163.com or sales@njpcb.com and ask nicely for them to stop spamming this newsgroup. I certainly hope no-one has given them any business, as this only encourages this obnoxious behaviour.

Article: 104092
fslearner wrote:
> ghelbig@lycos.com wrote:
> > Odds are, this equipment is way too old to have a 3.5" floppy -or-
> > an IDE port. Most likely, it's a 5.25" drive, maybe FAT12 360K,
> > maybe not.
>
> I suppose I should have mentioned that the drive in question is
> actually a 3.5" floppy. I'm not sure whether there's a spare IDE port
> however. It also runs some DOS variant.

In that case... What type of bus is in the system? If you can drop another card into the system, you have all kinds of options. There are USB drivers for DOS. If you can get that far, your choices open up significantly.

Article: 104093
Antti,

Is there a method of simply using the FPGA to control the OLED completely? I use my FPGA as a GPS-like DSP and want the output coordinate data displayed directly onto the OLED. Currently I send my data to a host computer to present on the screen, through the RS-232 link. I want to remove all that and use only the OLED already on the FPGA board. This sounds simple, but the code for this "simple" application is not straightforward, or I am missing big chunks of grey matter. I don't want to resort to C++ code or buy other kits to make this work. My guess is that OSRAM wants to force us to buy their RS-030 reference design kit to get started.

-Andrew

"Antti" <Antti.Lukats@xilant.com> wrote in message news:1150569349.569369.323620@h76g2000cwa.googlegroups.com...
> Eric Smith schrieb:
>
>> "Bluespace Technologies" <bluespace@rogers.com> writes:
>> > I'm trying to use the sample Pictiva VHDL code from Avnet to drive
>> > a mini OLED (OSRAM Pictiva 128x64 pixel) on their test board, to
>> > get anything presented (numerical data) on the screen. Has anyone
>> > got this device to work, using sample code or other?
>>
>> I haven't tried yet. I am somewhat pissed off that the datasheet on
>> the controller chip in the module is apparently nearly impossible to
>> obtain, and the data sheet on the module doesn't give the details of
>> the interface other than electrical parameters.
>
> datasheet for the controller IC
>
> http://www.mikrocontroller.net/attachment.php/347504/Treiber_IC-SSD0323_OLED_128x64_GELB.pdf
>
> its all on the net :)
>
> Antti

Article: 104094
Bluespace Technologies schrieb:
> Antti,
>
> Is there a method of simply using the FPGA to control the OLED
> completely? I use my FPGA as a GPS-like DSP and want the output
> coordinate data displayed directly onto the OLED. Currently I send my
> data to a host computer to present on the screen, through the RS-232
> link. I want to remove all that and use only the OLED already on the
> FPGA board. This sounds simple, but the code for this "simple"
> application is not straightforward, or I am missing big chunks of grey
> matter. I don't want to resort to C++ code or buy other kits to make
> this work. My guess is that OSRAM wants to force us to buy their
> RS-030 reference design kit to get started.
>
> -Andrew

There is no need to buy any OSRAM kit; the OLED interface is very simple and you can easily control it with an FPGA, but you need something to handle the protocol in the FPGA, and that is easier to do with some kind of soft-core processor. Just connect the OLED to some kind of "GPIO" port of the soft core and write the control program in any language supported by that soft core. Sure, the control could be done without the soft-core processor, but then you need a complex state machine, which is far more difficult to design.

Antti

Article: 104095
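The soft-core "GPIO plus firmware" approach Antti describes amounts to bit-banging the display controller's command protocol from software. A Python model of just the bit-level part (the pin roles, MSB-first ordering, and the example command byte are assumptions for illustration, not taken from the SSD0323 datasheet):

```python
def bitbang_byte(value: int, is_command: bool):
    """Model shifting one byte out MSB-first over a 3-wire-style
    interface, as (d/c level, data bit, clock event) tuples -- the
    sequence soft-core firmware would produce by toggling GPIO pins.
    Pin roles and polarities here are illustrative assumptions."""
    dc = 0 if is_command else 1   # assume D/C held low for commands
    events = []
    for bit in range(7, -1, -1):  # MSB first, a common convention
        events.append((dc, (value >> bit) & 1, "clk_rise"))
    return events

# A hypothetical command byte 0x15 shifts out as 0,0,0,1,0,1,0,1
# with the D/C line held low for the whole byte.
out = bitbang_byte(0x15, is_command=True)
assert [b for _, b, _ in out] == [0, 0, 0, 1, 0, 1, 0, 1]
assert all(dc == 0 for dc, _, _ in out)
```

This is why the soft-core route is attractive: the firmware loop above is a few lines of C or assembler, whereas the same sequencing as a hand-written VHDL state machine is considerably more work to get right.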
John_H schrieb:
> Peter Alfke wrote:
> > This is getting ever more confusing.
[]
> I'm not aware (though I haven't done deep research) that there's a
> lower limit on the pixel clock.

"A pixel clock below 22.5MHz for a duration of 1s is to be considered IDLE (link down)" - this sounds like a lower limit?

Antti

Article: 104096
Peter Alfke schrieb:
> This is getting ever more confusing.
>
> I think we agreed already that there are roughly 2 million pixels x 60
> Hz, roughly 150 million pixels per second.
>
> There are three channels, one per color, and each pixel color is
> represented by 8 serial bits. Encoded with 8B/10B this requires 10
> bits. 150 million pixels times 10 bits = 1.5 gigabits per second. And
> this obviously three times, once per color.
> Inside the FPGA, the pixel color is represented as 8 bits in parallel,
> which gets us back to a manageable 150 MHz and a total of 24 channels.
> The serial bit rate is too high (by a factor of 2) for general-purpose
> I/O; therefore, as I claimed, only dedicated inputs can handle such a
> high bit rate of 1.5 Gbps. But if you are willing to dedicate a total
> of 24 input pins (plus clocks), then the frequency is only 150 MHz,
> which any modern chip handles easily.
>
> The total traffic is roughly 4 gigabits per second. There is no way
> around that, unless you use compression.
>
> Can we all agree on this?
> Peter Alfke, Xilinx Applications
> ================

Hi Peter,

I was about to reply to one of your earlier posts but it was nice weather, so I went to the "English Garden" instead with the children. But you have been busy responding in this thread :)

The DVI clock requirement for PLL lock in the receiver is 25MHz up to the supported max (165MHz). One "symbol" is 10 bits, thus we have a bit rate of 250Mb/s to 1.65Gb/s per differential pair. The electrical levels are differential, DC coupled, with termination to 3.3V; the transmitter differential swing is 500mV (400-600mV). The swing is "centered" half down from the termination, so the single-ended signal swings down from 3.3V. To my knowledge, FPGA I/O standards do not include an option that supports this signalling directly. The input differential margin is >150mV - OK, that is compatible with FPGA LVDS inputs: if they work at around 3.3V DC levels, then a 50-ohm termination to 3.3V and an FPGA LVDS input could work.

The DVI clock runs at symbol rate, thus we need a PLL to lock at 10X the clock. Next, we need a per-line clock lock to the data stream (a digital delay lock) and per-line deskew correction.

All of the above is just that much trouble that it makes it unreasonable even to think about implementing DVI in an FPGA without using a proper DVI receiver or transmitter IC.

The CH7301 has a 12-bit DDR-like data bus, so it requires for FPGA communication:
12 for data (SDR or DDR)
1 (or 2) for pixel clock (can be single-ended or differential)
5 for control: DE, HSYNC, VSYNC, SDA and SCL (I2C for parameter setting)

The chip has an internal register to adjust the clock edge in order to sample the data lines at the correct phase relation. The chip costs some 8 EUR; I don't have pricing on DVI receivers, but I believe those prices are also in the range of 10 EUR. As an extra, the CH7301 also includes analog RGB DACs, so you can use either a DVI or an analog monitor!

I hope the above explains why I said in my first posting in this thread: "NO WAY". I agree that with some really clever tricks and some omissions, some DVI things could be done directly in the FPGA using FPGA I/O pins, but it really doesn't seem like a reasonable thing to do. Unless someone has months and months of time he wishes to waste.

Antti

Article: 104097
Hi all,

I'm also working with a Pictiva OLED with my Spartan-3, and I was going to ask you about the best way to handle this display. From what I'm reading, it seems easy to implement its control with something like MicroBlaze on your FPGA, while pretty hard without it, with just VHDL code. I've never done anything with MicroBlaze so far, and at this point I don't know which way will be easier and shorter: keep VHDLing, or move to the never-used MicroBlaze, or PicoBlaze if the latter would do that well.

Thanks,
Marco

Antti ha scritto:
> Bluespace Technologies schrieb:
>
> > Antti,
> >
> > Is there a method of simply using the FPGA to control the OLED
> > completely? I use my FPGA as a GPS-like DSP and want the output
> > coordinate data displayed directly onto the OLED. Currently I send
> > my data to a host computer to present on the screen, through the
> > RS-232 link. I want to remove all that and use only the OLED already
> > on the FPGA board. This sounds simple, but the code for this
> > "simple" application is not straightforward, or I am missing big
> > chunks of grey matter. I don't want to resort to C++ code or buy
> > other kits to make this work. My guess is that OSRAM wants to force
> > us to buy their RS-030 reference design kit to get started.
> >
> > -Andrew
>
> There is no need to buy any OSRAM kit; the OLED interface is very
> simple and you can easily control it with an FPGA, but you need
> something to handle the protocol in the FPGA, and that is easier to do
> with some kind of soft-core processor. Just connect the OLED to some
> kind of "GPIO" port of the soft core and write the control program in
> any language supported by that soft core. Sure, the control could be
> done without the soft-core processor, but then you need a complex
> state machine, which is far more difficult to design.
>
> Antti

Article: 104098
Marco schrieb:
> Hi all,
>
> I'm also working with a Pictiva OLED with my Spartan-3, and I was
> going to ask you about the best way to handle this display. From what
> I'm reading, it seems easy to implement its control with something
> like MicroBlaze on your FPGA, while pretty hard without it, with just
> VHDL code. I've never done anything with MicroBlaze so far, and at
> this point I don't know which way will be easier and shorter: keep
> VHDLing, or move to the never-used MicroBlaze, or PicoBlaze if the
> latter would do that well.
> Thanks,
> Marco

It all depends - if you already have a MicroBlaze in the FPGA, then just take the Pictiva OPB IP core and use it. If you don't, then you can choose any soft core that pleases you: you can use PicoBlaze, or if you are more familiar with the PIC, AVR or 8051, you can use those as FPGA soft cores as well. If resource use is at a premium, then PicoBlaze is very small.

Antti

Article: 104099
In article <1150669489.775416.324560@f6g2000cwb.googlegroups.com>, vans <svasanth@gmail.com> wrote:
> Hi,
>
> Thanks for all the replies.
>
> > But each pixel has to shoot ten bits through the wires, as you said
> > above. That is 1.5Gbps or 750MHz worst case frequency. Both your op
> > amps and your FPGAs will have trouble coping with that.
>
> I don't understand how it's 1.5Gbps. If we have a clock freq. of 150
> MHz, we are serially sending one bit every 1 / 150MHz = ~6.7ns.

We have a *pixel clock* of 150MHz, so we have to send a whole pixel, which thanks to the transmission encoding takes ten bits, on each of the three colour channels, every 6.7ns; that is, the channel bit rate is 1500Mbps.

I don't know whether the 'TMDS Clock' differential pair on the DVI connector oscillates once per pixel or once per bit; my guess is once per pixel, with quite complicated clock-recovery circuitry at the other end of the link to recover the bit clock.

Tom