In article <1150662005.425025.29630@c74g2000cwc.googlegroups.com>,
Peter Alfke <alfke@sbcglobal.net> wrote:
>Falk, I had checked that, but my question was: how much of the traffic
>is done in parallel?
>If it is all done serially, then 3 to 5 Gbps can only be done by
>dedicated "Multi-gigabit transceivers" and then only with encoding
>methods that guarantee transitions, like 8B/10B.

It's a rather odd encoding:

- write down bit1, bit1^bit2, bit2^bit3, bit3^bit4 ...
- also write down bit1, bit1^~bit2, bit2^~bit3, ...
- pick whichever of the above has fewest transitions, write it down
  followed by one bit to record whether you did the per-bit inversion
  or not
- optionally, if a counter tracking DC balance suggests that you'd be
  better off with more or fewer set bits, invert all nine bits and
  append a one. Otherwise append a zero.

And it's one differential pair per colour.

Tom

Article: 104101
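For readers wanting to experiment: in the DVI 1.0 spec the stage Tom
describes is a running XOR/XNOR chain (q(i) = q(i-1) xor d(i), or xnor
for the second candidate) rather than a literal pairwise XOR. Below is
a minimal VHDL sketch of that transition-minimising step. The entity
and port names are invented for illustration, and the DC-balance
bookkeeping that chooses the final inversion (bit 9) is omitted, so
this is not a complete DVI encoder.

library ieee;
use ieee.std_logic_1164.all;

entity tmds_enc_sketch is
  port (
    clk  : in  std_logic;
    din  : in  std_logic_vector(7 downto 0);  -- one colour byte
    dout : out std_logic_vector(9 downto 0)   -- encoded 10-bit symbol
  );
end tmds_enc_sketch;

architecture rtl of tmds_enc_sketch is
begin
  process(clk)
    variable q_x, q_n : std_logic_vector(7 downto 0); -- XOR / XNOR chains
    variable ones     : integer range 0 to 8;
    variable q        : std_logic_vector(8 downto 0);
  begin
    if rising_edge(clk) then
      -- build both candidate sequences as running chains
      q_x(0) := din(0);
      q_n(0) := din(0);
      for i in 1 to 7 loop
        q_x(i) := q_x(i-1) xor  din(i);
        q_n(i) := q_n(i-1) xnor din(i);
      end loop;
      -- the "fewest transitions" choice reduces to counting ones
      -- in the input byte
      ones := 0;
      for i in 0 to 7 loop
        if din(i) = '1' then
          ones := ones + 1;
        end if;
      end loop;
      if ones > 4 or (ones = 4 and din(0) = '0') then
        q := '0' & q_n;   -- bit 8 = 0: XNOR chain used
      else
        q := '1' & q_x;   -- bit 8 = 1: XOR chain used
      end if;
      -- bit 9 would record the optional inversion chosen by the
      -- DC-balance counter; that bookkeeping is omitted here
      dout <= '0' & q;
    end if;
  end process;
end rtl;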
Hi All,

Last time I had to generate pin assignments for an FPGA having only a
Protel netlist generated by someone from the Protel project. I have
written a small utility which solves this problem. I hope that it may
be useful for other people. As it is written in Python, it may be
easily customized.
--
BR, Wojtek

==== Cut below this line and save to pin_gen.py <=========
#!/usr/bin/python
# This simple program extracts the pin assignments from the
# Protel netlist and saves them as assignments to be put
# into the UCF (Xilinx) or QSF (Quartus) file
#
# This program is public domain - do with it whatever you need to
# Written by Wojciech M. Zabolotny (wzab@ise.pw.edu.pl)
# 2006.06.16 ver. 1.0
#
# The calling sequence:
#   pin_gen.py net_file chip_number written_settings_file type
# Currently type may be "altera" or "xilinx"
import sys

# Write a pin setting in the appropriate format described by "type"
def write_pin(outfile, type, pin, netname):
    if type == "altera":
        outfile.write("set_location_assignment PIN_"+pin+" -to "+netname+"\n")
    elif type == "xilinx":
        outfile.write("NET \""+netname+"\" LOC = \""+pin+"\""+"\n")
    else:
        raise Exception("Unknown type of chip, I can't format assignment")

netfile = open(sys.argv[1], "r")
chip = sys.argv[2]
setsfile = open(sys.argv[3], "w")
chiptype = sys.argv[4]

while True:
    g = netfile.readline()
    if g == "":
        break
    g = g.strip()
    if g == "(":
        # Beginning of a new net.
        # Extract the name of the net - this
        # is the first line after (
        netname = netfile.readline().strip()
        if netname == "":
            raise Exception("File ended in the middle of net!")
        # Now we are checking if the desired chip
        # is connected to the net
        while True:
            line = netfile.readline().strip()
            if line == "":
                raise Exception("File ended in the middle of net!")
            if line == ")":
                # End of the net
                break
            # The line should have the form: ELEMENT-PIN
            hyph_pos = line.find("-")
            element = line[0:hyph_pos]
            if element == chip:
                pin = line[(hyph_pos+1):]
                write_pin(setsfile, chiptype, pin, netname)

setsfile.close()
netfile.close()

Article: 104102
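To make the expected input format concrete, here is a made-up fragment
of a Protel netlist (net name and designators invented for this
example). As the parser above shows, each net is a block of the form:

(
CLK50MHZ
U1-P77
U2-3
)

With an invocation such as

    python pin_gen.py board.net U1 pins.ucf xilinx

the script would emit, for the fragment above:

    NET "CLK50MHZ" LOC = "P77"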
In my previous post one line has wrapped:

outfile.write("set_location_assignment PIN_"+pin+" -to "+netname+"\n")

Please also keep in mind that Python depends on strict line
indentation. Preserve the original indentation when saving the source.
--
BR, Wojtek

Article: 104103
Thomas Womack schrieb:
> In article <1150669489.775416.324560@f6g2000cwb.googlegroups.com>,
> vans <svasanth@gmail.com> wrote:
> >Hi,
> >
> >Thanks for all the replies.
> >
> >> But each pixel has to shoot ten bits through the wires, as you said
> >> above. That is 1.5Gbps or 750MHz worst case frequency. Both your op
> >> amps and your FPGAs will have trouble coping with that.
> >>
> >
> >I don't understand how its 1.5Gbps. If we have a clock freq. of 150
> >MHz, we are serially sending one bit every 1 / 150MHz = ~6.7ns.
>
> We have a *pixel clock* of 150MHz, so have to send a whole pixel,
> which thanks to the transmission encoding takes ten bits, on each of
> the three colour channels, every 6.7ns; that is, the channel frequency
> is 1500Mbps.
>
> I don't know whether the 'TMDS Clock' differential pair on the DVI
> connector oscillates once per pixel or once per bit; my guess is once
> per pixel, with quite complicated clock-recovery circuitry at the
> other end of the link to recover the bit clock.
>
> Tom

1 clock per 10-bit symbol. Each of the 3 data lines needs a separate
delay lock to its data stream, and the 3 data streams must be aligned
after decoding to restore the pixel data, as there may be large skew.

Antti

Article: 104104
Hi everybody,

Xilinx answered my WebCase:

"I've spoken to our XST expert and he informed me that FSM minimization
isn't a feature of XST. If you have two states that are totally
identical then the tool will optimise them, otherwise the FSM will be
encoded with the encoding algorithm specified within the process
properties."

Bad news for this topic. I'm going to check out Precision Synthesis
RTL on this.

Best regards
Eilert

Article: 104105
Antti,

At this time I'm not working with MicroBlaze (never have... what is
the Pictiva OPB IP core, a core for MicroBlaze to deal with the
Pictiva?). Being honest, I have really little experience with VHDL and
none with PicoBlaze either. I'm learning it all now; would you suggest
I go on with just VHDL, or spend some of my time implementing the
PicoBlaze? In the near future I'll need to place a UART inside my
Spartan-3, and with PicoBlaze that could be easy as well, couldn't it?
Do I need to add external SRAM when working with these soft-cores?

Thanks,
Marco

Article: 104106
One of the things you can do is to build a stimulus mechanism into your
design for early testing, both for simulation and for running in the
real chip. Just remember to write it to be synthesisable, not testbench
style. Once you are confident it is probably working, introduce your
real source and sort out any issues caused by it being different to
your internal stimulus mechanism - for instance dealing with real setup
and hold on input signals.

John Adair
Enterpoint Ltd. - Home of Raggedstone1. The Low Cost Spartan-3
Development Board.
http://www.enterpoint.co.uk

"anand" <writeanand@gmail.com> wrote in message
news:1150684323.749607.70170@c74g2000cwc.googlegroups.com...
>>You wouldn't really run a stimulus file through it -- you'd need to
>>use (or generate) an appropriate stream to run in real life to wiggle
>>the inputs pins on your FPGA.
>
> Thanks Tim. So, if I had to test the application in real life, then I
> have to stimulate the input pins, correct? So, how do I go about
> testing on the FPGA as to whether the design actually works in real
> life (not in simulation)? Assume I have a H.264 stream that I can use.
>
> So in the dev board kit they have flash memory, I guess I can probably
> load the flash memory with the stimulus stream. But question is, how
> do I make it wiggle the appropriate pins of the FPGA itself, and how
> do I collect the output to verify it?
>
> Tim Wescott wrote:
>> anand wrote:
>>
>> > Hi,
>> >
>> > I am a pre-si verification engineer looking to learn HW design,
>> > synthesis and P&R through FPGAs. (I am already very comfortable
>> > with Verilog, so that is not the issue, I really want to learn to
>> > "design+synthesize+P&R" as opposed to "code in verilog").
>> >
>> > Recently checked out Spartan dev kit, looks very affordable.
>> > Questions:
>> >
>> > Is this the best way to go (for a hobbyist)? I am willing to spend
>> > couple 100 $$ on dev kits etc. Ive already downloaded the ISE
>> > Webpack and have full access to Modelsim at work.
>>
>> Probably as good as any.
>>
>> > What books should I get for DESIGN & SYNTHESIS (again, not verilog
>> > coding but "designing" synthesizable code with Verilog). Please
>> > suggest books with and without an FPGA "tilt" or focus if possible.
>>
>> I use Thomas & Moorby's "The Verilog Hardware Description Language",
>> which is mostly about Verilog but sprinkles in guidance for
>> synthesizable designs.
>>
>> > Once you do everything and download stuff into the FPGA, how do you
>> > run the application?
>>
>> You don't. What's in an FPGA isn't an "application" and it doesn't
>> "run" in the sense that an application does. Once you've _configured_
>> the FPGA you let it loose, and off it goes doing whatever you coded
>> it to do.
>>
>> > For ex, if you design a H.264 encoder (just for example), and the
>> > FPGA already has the code downloaded after Synthesis and P&R stage,
>> > how can I pass an example stimulus file (ie H.264 stream) to see if
>> > it works correctly?
>>
>> You wouldn't really run a stimulus file through it -- you'd need to
>> use (or generate) an appropriate stream to run in real life to wiggle
>> the inputs pins on your FPGA.
>>
>> > My grand vision is to build a H.264 encoder module in an FPGA,
>> > while teaching myself good design AND H.264 encoding in the
>> > process.
>> >
>> > Am I missing anything here? Anything else I should know?
>> >
>> > Thanks
>> >
>> You might want to start out a bit simpler -- I'd suggest blinking the
>> lights, then maybe blinking the lights under the control of the
>> switches.
>>
>> --
>> Tim Wescott
>> Wescott Design Services
>> http://www.wescottdesign.com
>>
>> Posting from Google? See http://cfaj.freeshell.org/google/
>>
>> "Applied Control Theory for Embedded Systems" came out in April.
>> See details at http://www.wescottdesign.com/actfes/actfes.html

Article: 104107
Hi all,

Can someone help me translate the following ABEL code to VHDL on
Xilinx?

-- BOOT pin istype "reg_d";
-- !IOWR pin;
-- BOOT.clk = !IOWR;
-- BOOT.aset = RESET;
-- BOOT.aclr = 0;
-- BOOT.d = IOD0 & BOOT_BASE # (BOOT.pin & !BOOT_BASE);
-- BOOT.oe = 1;

It seems to be a clocked process, but IOWR is not a real clock, it
changes only sometimes. The .pin suffix, according to the ABEL
language, is a feedback, so should I define it with a temp signal
associated with the BOOT pin, like:

signal boot_tmp : std_logic;

boot_tmp <= (IOD0 and BOOT_BASE) or (boot_tmp and (not BOOT_BASE));
BOOT <= boot_tmp;

Any help will be appreciated!

PS: I can not find the ABEL-to-VHDL converter in ISE 8.1.

Steven

Article: 104108
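For what it's worth, here is a minimal sketch of how that ABEL fragment
might map to VHDL, assuming BOOT.clk = !IOWR means the register clocks
on the falling edge of IOWR and BOOT.aset = RESET is an asynchronous
set. The entity name is invented, and the internal signal replaces
ABEL's BOOT.pin feedback:

library ieee;
use ieee.std_logic_1164.all;

entity boot_reg is
  port (
    IOWR      : in  std_logic;
    RESET     : in  std_logic;
    IOD0      : in  std_logic;
    BOOT_BASE : in  std_logic;
    BOOT      : out std_logic
  );
end boot_reg;

architecture rtl of boot_reg is
  signal boot_q : std_logic;  -- internal copy, replaces ABEL's BOOT.pin
begin
  process (IOWR, RESET)
  begin
    if RESET = '1' then
      boot_q <= '1';                  -- BOOT.aset = RESET
    elsif falling_edge(IOWR) then     -- BOOT.clk = !IOWR
      -- BOOT.d = IOD0 & BOOT_BASE # (BOOT.pin & !BOOT_BASE)
      boot_q <= (IOD0 and BOOT_BASE) or (boot_q and not BOOT_BASE);
    end if;
  end process;

  BOOT <= boot_q;  -- BOOT.oe = 1: always drive the pin
end rtl;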
kia rui schrieb:
> what's the difference with LVTTL, LVCMOS and 3.3V-PCI for signalling a
> PCI fpga/cpld?

These signalling standards define different worst-case output voltages,
input thresholds and a couple of other parameters that the IO pins must
adhere to.

A 3.3V CMOS IO more or less automatically fulfills the LVTTL standard,
so the FPGA is likely to be configured identically no matter what you
select. The only difference is that the timing analyzer software is
told to use different voltage thresholds to define timing.

PCI3.3 might also use the same drivers, maybe with adjusted slew rate.
When you select PCI3.3 the timing analyzer will report slower delays
because PCI defines timing for a rather high output load of 50pF.

Kolja Sulimma

Article: 104109
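For reference, in a Xilinx flow this selection ends up as an IOSTANDARD
attribute in the UCF. The net names below are invented for the example,
but LVTTL and PCI33_3 are the standard names ISE uses:

NET "user_io"    IOSTANDARD = LVTTL ;
NET "pci_ad<31>" IOSTANDARD = PCI33_3 ;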
> I don't know whether the 'TMDS Clock' differential pair on the DVI
> connector oscillates once per pixel or once per bit; my guess is once
> per pixel, with quite complicated clock-recovery circuitry at the
> other end of the link to recover the bit clock.

If this is the case, then you can't use an LVDS receiver alone. You
HAVE to have a DVI decoder / encoder, unless you want to design the
synchronization circuitry (which I don't really have the time to do).
Is this statement correct?

Are the 10 bits transmitted one at a time, at each clock cycle? Or 10
bits all at once per clock cycle (elapsed in between the positive edges
of the clock)?

Thanks

Article: 104110
Riddle: where did that +64[ns] on source/dest clk come from??
=========================================================================
Timing constraint: TS_CORECLK = PERIOD TIMEGRP "NET_CORECLK" 7.800 nS HIGH 3.900 nS
WARNING:Xst:2245 - Timing constraint is not met.
 Clock period: 66.690ns (frequency: 14.995MHz)
 Total number of paths / destination ports: 164005 / 21159
 Number of failed paths / ports: 75783 (46.21%) / 4474 (21.14%)
-------------------------------------------------------------------------
Slack:                  -58.890ns
Source:                 sym/BLKRAM (RAM)
Destination:            sym/tEMPTY (FF)
Data Path Delay:        5.928ns (Levels of Logic = 2)
Source Clock:           DDR_CKFB1 rising 2.0X +64 at 0.693ns
Destination Clock:      DDR_CKFB1 rising +64 at 1.387ns

Data Path: sym/BLKRAM (RAM) to sym/tEMPTY (FF)
                                  Gate    Net
  Cell:in->out        fanout     Delay   Delay  Logical Name (Net Name)
  ----------------------------------------      ------------
  RAMB16_S18_S36:CLKB->DOPB3  2   2.394   0.903  sym/BLKRAM (sym/parbit<3>)
  LUT4_L:I3->LO               1   0.551   0.126  sym/earlyempty1 (sym/earlyempty)
  LUT4:I3->O                  1   0.551   0.801  sym/_n02221 (sym/_n0222)
  FDPE:CE                         0.602          sym/tEMPTY
  ----------------------------------------
  Total                           5.928ns (4.098ns logic, 1.830ns route)
                                  (69.1% logic, 30.9% route)

Article: 104111
"vans" <svasanth@gmail.com> wrote in message news:1150729725.562787.159860@y41g2000cwy.googlegroups.com... > > If this is the case, then you can't use an LVDS receiver alone. You > HAVE to have a DVI decoder / encoder, unless you want to design the > synchronization circuitry (which I don't really have the time to do). > > Is this statement correct? Are the 10 bits transmitted one at a time, > at each clock cylce? or 10 bits all at once per clock cycle (elapsed > inbetween the positive edges of the clock). > > Thanks The 10 bits are transmitted at 10x the serial rate. Antti pointed out the clock is delivered once per 10 bits of encoded color per channel with a dedicated twisted pair for each color. If your display is not a small LCD panel, you need the dedicated converter.Article: 104112
"Antti" <Antti.Lukats@xilant.com> wrote in message news:1150700330.283639.312000@c74g2000cwc.googlegroups.com... > Bluespace Technologies schrieb: > >> Antti, >> >> Is there a method of simply using the FPGA to control the OLED >> completely? I >> use my FPGA as a GPS-like DSP and want the output coordinate data >> displayed >> directly onto the OLED. Currently I send my data to a host computer to >> present on the screen, through the RS232 link. I want to remove all that >> and >> use only the OLED already in the FPGA board. This sounds simple but the >> code >> for this "simple" application is not straight forward, or I am missing >> big >> chunks of grey matter. I don't want to resort to C++ code or buy other >> kits >> to make this work. My guess is that OSRAM wants to force us to buy their >> RS-030 reference design kit to get started. >> >> -Andrew > > there is no need to buy any osram kit, the oled interface is very > simple, you can easlily > control it with an FPGA, but you need something to handle the protocol > in the FPGA > and that is easier todo with some kind of soft-core processor just > connect the oled > to some kind of "GPIO" port of the softcore and write the control > program in any language > supported by that softcore. Sure the control could be done without the > softcore processor > but then you need a complex statemachine what is far more difficult to > design. > > Antti > Antti, Well that explains the issue. The "simple" design is not so simple anymore. I was trying to write a display control without the use of a "soft core" and getting nowhere. The Avnet demo souce code assumes that the user is familiar with the MicroBlaze soft processor. The custom OPB peripheral was provided by them as optional (so they say) but it seems that I have no choice but to implement someone else's proven design. I was hoping to not have to buy more stuff!! Does anyone have a soft processor design that make this process easier, i.e. without having to buy a MicroBlaze soft processor? -AndrewArticle: 104113
> The 10 bits are transmitted serially, at 10x the pixel clock rate.
> Antti pointed out the clock is delivered once per 10 bits of encoded
> color per channel, with a dedicated twisted pair for each color.
>
> If your display is not a small LCD panel, you need the dedicated
> converter.

Can anyone recommend a (possibly cheap?) DVI converter that can decode
and encode?

Thanks

Article: 104114
vans schrieb:
> > I don't know whether the 'TMDS Clock' differential pair on the DVI
> > connector oscillates once per pixel or once per bit; my guess is
> > once per pixel, with quite complicated clock-recovery circuitry at
> > the other end of the link to recover the bit clock.
>
> If this is the case, then you can't use an LVDS receiver alone. You
> HAVE to have a DVI decoder / encoder, unless you want to design the
> synchronization circuitry (which I don't really have the time to do).
>
> Is this statement correct? Are the 10 bits transmitted one at a time,
> at each clock cycle? Or 10 bits all at once per clock cycle (elapsed
> in between the positive edges of the clock)?
>
> Thanks

Sure, the 10 bits on each of the data lines are transmitted within 1
clock on the clock line. And you must assume that the skew between the
3 data lines is unknown and may be more than 1 clock on the clock line:
you first need to find the good point to slice the data on each data
line in the 10x clock domain, and then you need to symbol-align so that
symbols belonging together appear at the same pixel clock, as the skew
may shift them into different pixel clock phases.

Antti

Article: 104115
vans schrieb:
> > The 10 bits are transmitted serially, at 10x the pixel clock rate.
> > Antti pointed out the clock is delivered once per 10 bits of encoded
> > color per channel, with a dedicated twisted pair for each color.
> >
> > If your display is not a small LCD panel, you need the dedicated
> > converter.
>
> Can anyone recommend a (possibly cheap?) DVI converter that can decode
> and encode?
>
> Thanks

Transmitter: CH7301C. For a receiver, look at www.ti.com.

Antti

Article: 104116
In article <1150706058.047529.137800@y41g2000cwy.googlegroups.com>,
Antti <Antti.Lukats@xilant.com> wrote:

>1 clock per 10-bit symbol.

>Each of the 3 data lines needs a separate delay lock to its data
>stream, and the 3 data streams must be aligned after decoding to
>restore the pixel data, as there may be large skew.

What information is there in the pixel data on which you can align the
bits? If I understand the 8->10 encoding scheme correctly, any stream
of bits decodes to a valid sequence of bytes; is there some kind of
synchronisation data sent during the periods when, on a CRT, the
electron beam would be returning to line-starts or to the top of the
screen?

I can see roughly how you would construct a clock at 10x the frequency
of the clock presented, and align it using a variable-tap delay line
so that its rising edges correspond to some convenient point in the
middle of a symbol bit, but I don't see how you could ensure that it
started at bit 0 of a symbol, and even less how you could lock to the
first pixel of a scanline.

Tom

Article: 104117
Bluespace Technologies schrieb:
> "Antti" <Antti.Lukats@xilant.com> wrote in message
> news:1150700330.283639.312000@c74g2000cwc.googlegroups.com...
> > Bluespace Technologies schrieb:
> >
> >> Antti,
> >>
> >> Is there a method of simply using the FPGA to control the OLED
> >> completely? I use my FPGA as a GPS-like DSP and want the output
> >> coordinate data displayed directly onto the OLED. Currently I send
> >> my data to a host computer to present on the screen, through the
> >> RS232 link. I want to remove all that and use only the OLED already
> >> in the FPGA board. This sounds simple but the code for this
> >> "simple" application is not straightforward, or I am missing big
> >> chunks of grey matter. I don't want to resort to C++ code or buy
> >> other kits to make this work. My guess is that OSRAM wants to force
> >> us to buy their RS-030 reference design kit to get started.
> >>
> >> -Andrew
> >
> > there is no need to buy any osram kit, the oled interface is very
> > simple, you can easily control it with an FPGA, but you need
> > something to handle the protocol in the FPGA, and that is easier to
> > do with some kind of soft-core processor: just connect the oled to
> > some kind of "GPIO" port of the softcore and write the control
> > program in any language supported by that softcore. Sure the control
> > could be done without the softcore processor, but then you need a
> > complex statemachine which is far more difficult to design.
> >
> > Antti
>
> Antti,
>
> Well that explains the issue. The "simple" design is not so simple
> anymore. I was trying to write a display control without the use of a
> "soft core" and getting nowhere. The Avnet demo source code assumes
> that the user is familiar with the MicroBlaze soft processor. The
> custom OPB peripheral was provided by them as optional (so they say)
> but it seems that I have no choice but to implement someone else's
> proven design. I was hoping to not have to buy more stuff!! Does
> anyone have a soft processor design that makes this process easier,
> i.e. without having to buy a MicroBlaze soft processor?
>
> -Andrew

As I said, you can take any soft-core you like; pretty much all of them
have some sort of "GPIO". Then connect the OLED to the GPIO, write
software bit-bang low level functions to transfer single bytes to the
OLED, and then some upper level functions that deal with the character
rom and stuff like that. The OLED is a graphical display with no
integrated char rom, and it needs init values pretty different from the
power-up defaults in order to turn itself on. You can't expect a few
wires and flip-flops in an FPGA to perform the init process and then do
the character display emulation on a graphic screen for you.

MicroBlaze is not at all needed. Any soft-core that has a few kilobytes
of code memory and at least 1 kb of data memory (for the character rom)
could be used. In the case of PicoBlaze you need to kludge a bit, as
there is no place for the character rom, which would have to be
connected as an io peripheral. Other softcores that can access larger
memories would be better suited. It all depends what you prefer.

http://hydraxc.xilant.com/CMS/index.php?option=com_remository&Itemid=41&func=fileinfo&id=7

XCAPP004 is an open-source T51 based system running intel 8051 basic;
there are sufficient io ports for the connection to the OLED. I think
you could even write the whole thing in basic :) or write it in 8051
assembler or in C. Just one example; there are other possible cores
that are smaller than T51.

Antti

Article: 104118
Thomas Womack schrieb:
> In article <1150706058.047529.137800@y41g2000cwy.googlegroups.com>,
> Antti <Antti.Lukats@xilant.com> wrote:
>
> >1 clock per 10-bit symbol.
>
> >Each of the 3 data lines needs a separate delay lock to its data
> >stream, and the 3 data streams must be aligned after decoding to
> >restore the pixel data, as there may be large skew.
>
> What information is there in the pixel data on which you can align the
> bits? If I understand the 8->10 encoding scheme correctly, any stream
> of bits decodes to a valid sequence of bytes; is there some kind of
> synchronisation data sent during the periods when, on a CRT, the
> electron beam would be returning to line-starts or to the top of the
> screen?
>
> I can see roughly how you would construct a clock at 10x the frequency
> of the clock presented, and align it using a variable-tap delay line
> so that its rising edges correspond to some convenient point in the
> middle of a symbol bit, but I don't see how you could ensure that it
> started at bit 0 of a symbol, and even less how you could lock to the
> first pixel of a scanline.
>
> Tom

May I suggest you read the DVI specification?

Antti

Article: 104119
How about a freely available PicoBlaze microcontroller? I assume you
can probably write assembly code for the PicoBlaze and that in turn
drives the stimulus to the chip.

As for the collection/monitoring of outputs, I believe the PicoBlaze
can read from the FPGA output pins and either do print statements or
populate its own registers for offline reading.

Would this work?

John Adair wrote:
> One of the things you can do is to build a stimulus mechanism into
> your design for early testing, both for simulation and for running in
> the real chip. Just remember to write it to be synthesisable, not
> testbench style. Once you are confident it is probably working,
> introduce your real source and sort out any issues caused by it being
> different to your internal stimulus mechanism - for instance dealing
> with real setup and hold on input signals.
>
> John Adair
> Enterpoint Ltd. - Home of Raggedstone1. The Low Cost Spartan-3
> Development Board.
> http://www.enterpoint.co.uk
>
> "anand" <writeanand@gmail.com> wrote in message
> news:1150684323.749607.70170@c74g2000cwc.googlegroups.com...
> >>You wouldn't really run a stimulus file through it -- you'd need to
> >>use (or generate) an appropriate stream to run in real life to
> >>wiggle the inputs pins on your FPGA.
> >
> > Thanks Tim. So, if I had to test the application in real life, then
> > I have to stimulate the input pins, correct? So, how do I go about
> > testing on the FPGA as to whether the design actually works in real
> > life (not in simulation)? Assume I have a H.264 stream that I can
> > use.
> >
> > So in the dev board kit they have flash memory, I guess I can
> > probably load the flash memory with the stimulus stream. But
> > question is, how do I make it wiggle the appropriate pins of the
> > FPGA itself, and how do I collect the output to verify it?
> >
> > Tim Wescott wrote:
> >> anand wrote:
> >>
> >> > Hi,
> >> >
> >> > I am a pre-si verification engineer looking to learn HW design,
> >> > synthesis and P&R through FPGAs. (I am already very comfortable
> >> > with Verilog, so that is not the issue, I really want to learn to
> >> > "design+synthesize+P&R" as opposed to "code in verilog").
> >> >
> >> > Recently checked out Spartan dev kit, looks very affordable.
> >> > Questions:
> >> >
> >> > Is this the best way to go (for a hobbyist)? I am willing to
> >> > spend couple 100 $$ on dev kits etc. Ive already downloaded the
> >> > ISE Webpack and have full access to Modelsim at work.
> >>
> >> Probably as good as any.
> >>
> >> > What books should I get for DESIGN & SYNTHESIS (again, not
> >> > verilog coding but "designing" synthesizable code with Verilog).
> >> > Please suggest books with and without an FPGA "tilt" or focus if
> >> > possible.
> >>
> >> I use Thomas & Moorby's "The Verilog Hardware Description
> >> Language", which is mostly about Verilog but sprinkles in guidance
> >> for synthesizable designs.
> >>
> >> > Once you do everything and download stuff into the FPGA, how do
> >> > you run the application?
> >>
> >> You don't. What's in an FPGA isn't an "application" and it doesn't
> >> "run" in the sense that an application does. Once you've
> >> _configured_ the FPGA you let it loose, and off it goes doing
> >> whatever you coded it to do.
> >>
> >> > For ex, if you design a H.264 encoder (just for example), and the
> >> > FPGA already has the code downloaded after Synthesis and P&R
> >> > stage, how can I pass an example stimulus file (ie H.264 stream)
> >> > to see if it works correctly?
> >>
> >> You wouldn't really run a stimulus file through it -- you'd need to
> >> use (or generate) an appropriate stream to run in real life to
> >> wiggle the inputs pins on your FPGA.
> >>
> >> > My grand vision is to build a H.264 encoder module in an FPGA,
> >> > while teaching myself good design AND H.264 encoding in the
> >> > process.
> >> >
> >> > Am I missing anything here? Anything else I should know?
> >> >
> >> > Thanks
> >> >
> >> You might want to start out a bit simpler -- I'd suggest blinking
> >> the lights, then maybe blinking the lights under the control of the
> >> switches.
> >>
> >> --
> >> Tim Wescott
> >> Wescott Design Services
> >> http://www.wescottdesign.com
> >>
> >> Posting from Google? See http://cfaj.freeshell.org/google/
> >>
> >> "Applied Control Theory for Embedded Systems" came out in April.
> >> See details at http://www.wescottdesign.com/actfes/actfes.html

Article: 104120
Hi,

Until today I assumed there was no free eval of Impulse-C at all, but I
was wrong: the non-time-limited version is heavily size limited, only
for Xilinx, and does not support the PPC APU, but otherwise it is
useable for small designs.

Unfortunately the only example supplied with the CoDeveloper seems to
crash (the Impulse C team is trying to help sort it out), but even so I
managed to make a really simple and small blink-a-LED demo that
actually works in the FPGA (it worked first time downloaded into the
FPGA!)

http://hydraxc.xilant.com/CMS/index.php?option=com_remository&Itemid=41&func=fileinfo&id=13

The download archive includes the Impulse-C project and sources and the
ISE project and sources. It's just a counter and a toggle flip on
overflow, but it can be used to get started, or even to really write
some small application with Impulse-C.

I hope it is of some interest, as it is a design that can be tested out
on any Xilinx FPGA without the use of EDK.

Antti

Article: 104121
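For reference, the behaviour described (a free-running counter whose
overflow toggles the LED) is a handful of lines in plain VHDL. The
sketch below illustrates that behaviour only; it is not the actual
Impulse-C generated output, and the entity name, port names and counter
width are made up:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity blink is
  port (
    clk : in  std_logic;
    led : out std_logic
  );
end blink;

architecture rtl of blink is
  signal cnt : unsigned(23 downto 0) := (others => '0');
  signal t   : std_logic := '0';
begin
  process(clk)
  begin
    if rising_edge(clk) then
      cnt <= cnt + 1;                                -- free-running counter
      if cnt = to_unsigned(2**24 - 1, cnt'length) then
        t <= not t;                                  -- toggle on overflow
      end if;
    end if;
  end process;

  led <= t;
end rtl;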
Hey Guys,

Just realized that the links specified below need to be modified. Here
are the correct links:

PCI Express architecture:
http://www.altera.com/products/ip/images/pci_express_fig1_typ_app.pdf

x8 IP core:
http://www.altera.com/products/ip/iup/pci-express/m-alt-pcie8.html

Altera Stratix II GX based PCI Express development kit:
http://www.altera.com/products/devkits/altera/kit-pciexpress_s2gx.html

Overall Altera PCI Express solutions:
http://www.altera.com/technology/high_speed/protocols/pci_exp/pro-pci_exp.html

Regards,
Aashish

Aashish Malhotra wrote:
> Hi Jerome, Sylvan,
>
> Looking at your architecture requirements, I would say that your need
> is to design an endpoint solution. Here is a link that would perhaps
> help you see where an endpoint solution sits in the model PCI Express
> system architecture.
> http://www/products/ip/images/pci_express_fig1_typ_app.pdf
>
> Going a little bit further, Altera devices (and I think Xilinx is the
> same) are capable of doing both root complex as well as end point
> applications. Altera offers a complete portfolio of solutions for PCI
> Express applications; some highlights as it pertains to Stratix II GX
> are given below:
> - End point IP core (x1, x4, x8) with proven interoperability at the
>   PCI SIG (e.g. x8 IP core:
>   http://www/products/ip/iup/pci-express/m-alt-pcie8.html)
> - FPGA (Stratix II GX) with embedded transceivers that are shipping
>   today
> - Stratix II GX based PCI Express development kit orderable today
>   (http://www/products/devkits/altera/kit-pciexpress_s2gx.html)
>
> Besides Stratix II GX, Altera supports PCI Express on Cyclone II,
> Stratix II as well as Stratix GX. Here is a link that would provide
> you with an overall idea of Altera PCI Express solutions:
> http://www.altera.com/technology/high_speed/protocols/pci_exp/pro-pci_exp.html
>
> Please feel free to contact me if you have any questions
> (amalhotr@altera.com)
>
> Regards,
> Aashish
>
> Sylvain Munaut <SomeOne@SomeDomain.com> wrote:
> > > Looking to use some Xilinx V4FX or Altera Stratix GX parts for
> > > designing several endpoints using PCI Express for the 1st time. I
> > > don't have the PCI-Express spec yet but am wondering whether I
> > > need a root complex?
> > >
> > > My understanding is that root complex differs from endpoint in
> > > that it is used for interfacing to main CPU memory.
> > > However if I only want to communicate between various FPGA
> > > endpoints (using a switch) do I really need a root complex IP
> > > bridge? Seems like all IP cores are endpoints only?
> > >
> > > Any Xilinx or Altera PCI Express IP core recommendations based on
> > > experience?
> >
> > I would guess it's more or less like PCI and you need someone to
> > enumerate the busses and assign address.
> >
> > Sylvain

Article: 104122
Hi,

I'm new to Aurora and am having trouble simulating the example design
the core generated. When I execute sample_test.do in ModelSim, nothing
is clocked, and many of the signals remain undefined. I think I'm
missing a library, because there are a few modules that are called but
don't exist in the project, specifically MGT0_GT11. I've downloaded all
of the service packs and updates, but only have evaluation versions of
both ISE and ModelSim; could that be the problem? Do I need to
configure the MGT beforehand and integrate it into the core? If so, how
do I do that?

Thanks,
Catherine

Article: 104123
I'm using Xilinx 8.1 and would like to find a way to have bitgen create
its output files with a different basename from the default. It looks
like ISE propagates the top level Verilog module name as the basename
for all the subsequent files. Is there any way to override this
behaviour? I don't think the "Other Bitgen Command Line Options" menu
choice works for this.

Any ideas?

Thanks!
John Providenza

Article: 104124
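One possibility, if you can step outside the GUI (untested here, and
assuming the ISE 8.1 command-line flow): bitgen accepts an optional
output-file argument after the input NCD, so a script step such as

    bitgen -w top.ncd myname.bit

should produce myname.bit instead of top.bit. The file names above are
placeholders for the example.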
Providing your depth of vectors isn't too large you could do it this
way. The standard PicoBlaze has a limited code space and you may need
to look at extending its capabilities by adding on some bits, maybe
just a store blockram, if you need a larger depth of vectors.
Personally I would be tempted just to use a preloaded blockram with a
counter set to wrap around at an appropriate value (a sketch of this
follows the post), but that method assumes you have the necessary
vectors to start with.

John Adair
Enterpoint Ltd. - Home of Broaddown2. The Ultimate Spartan3
Development Board.
http://www.enterpoint.co.uk

"anand" <writeanand@gmail.com> wrote in message
news:1150734727.957006.204090@p79g2000cwp.googlegroups.com...
> How about a freely available PicoBlaze microcontroller? I assume you
> can probably write assembly code for the PicoBlaze and that in turn
> drives the stimulus to the chip.
>
> As for the collection/monitoring of outputs, I believe the PicoBlaze
> can read from the FPGA output pins and either do print statements or
> populate its own registers for offline reading.
>
> Would this work?
>
> John Adair wrote:
>> One of the things you can do is to build a stimulus mechanism into
>> your design for early testing, both for simulation and for running
>> in the real chip. Just remember to write it to be synthesisable, not
>> testbench style. Once you are confident it is probably working,
>> introduce your real source and sort out any issues caused by it
>> being different to your internal stimulus mechanism - for instance
>> dealing with real setup and hold on input signals.
>>
>> John Adair
>> Enterpoint Ltd. - Home of Raggedstone1. The Low Cost Spartan-3
>> Development Board.
>> http://www.enterpoint.co.uk
>>
>> "anand" <writeanand@gmail.com> wrote in message
>> news:1150684323.749607.70170@c74g2000cwc.googlegroups.com...
>> >>You wouldn't really run a stimulus file through it -- you'd need
>> >>to use (or generate) an appropriate stream to run in real life to
>> >>wiggle the inputs pins on your FPGA.
>> >
>> > Thanks Tim. So, if I had to test the application in real life,
>> > then I have to stimulate the input pins, correct? So, how do I go
>> > about testing on the FPGA as to whether the design actually works
>> > in real life (not in simulation)? Assume I have a H.264 stream
>> > that I can use.
>> >
>> > So in the dev board kit they have flash memory, I guess I can
>> > probably load the flash memory with the stimulus stream. But
>> > question is, how do I make it wiggle the appropriate pins of the
>> > FPGA itself, and how do I collect the output to verify it?
>> >
>> > Tim Wescott wrote:
>> >> anand wrote:
>> >>
>> >> > Hi,
>> >> >
>> >> > I am a pre-si verification engineer looking to learn HW design,
>> >> > synthesis and P&R through FPGAs. (I am already very comfortable
>> >> > with Verilog, so that is not the issue, I really want to learn
>> >> > to "design+synthesize+P&R" as opposed to "code in verilog").
>> >> >
>> >> > Recently checked out Spartan dev kit, looks very affordable.
>> >> > Questions:
>> >> >
>> >> > Is this the best way to go (for a hobbyist)? I am willing to
>> >> > spend couple 100 $$ on dev kits etc. Ive already downloaded the
>> >> > ISE Webpack and have full access to Modelsim at work.
>> >>
>> >> Probably as good as any.
>> >>
>> >> > What books should I get for DESIGN & SYNTHESIS (again, not
>> >> > verilog coding but "designing" synthesizable code with
>> >> > Verilog). Please suggest books with and without an FPGA "tilt"
>> >> > or focus if possible.
>> >>
>> >> I use Thomas & Moorby's "The Verilog Hardware Description
>> >> Language", which is mostly about Verilog but sprinkles in
>> >> guidance for synthesizable designs.
>> >>
>> >> > Once you do everything and download stuff into the FPGA, how do
>> >> > you run the application?
>> >>
>> >> You don't. What's in an FPGA isn't an "application" and it
>> >> doesn't "run" in the sense that an application does. Once you've
>> >> _configured_ the FPGA you let it loose, and off it goes doing
>> >> whatever you coded it to do.
>> >>
>> >> > For ex, if you design a H.264 encoder (just for example), and
>> >> > the FPGA already has the code downloaded after Synthesis and
>> >> > P&R stage, how can I pass an example stimulus file (ie H.264
>> >> > stream) to see if it works correctly?
>> >>
>> >> You wouldn't really run a stimulus file through it -- you'd need
>> >> to use (or generate) an appropriate stream to run in real life to
>> >> wiggle the inputs pins on your FPGA.
>> >>
>> >> > My grand vision is to build a H.264 encoder module in an FPGA,
>> >> > while teaching myself good design AND H.264 encoding in the
>> >> > process.
>> >> >
>> >> > Am I missing anything here? Anything else I should know?
>> >> >
>> >> > Thanks
>> >> >
>> >> You might want to start out a bit simpler -- I'd suggest blinking
>> >> the lights, then maybe blinking the lights under the control of
>> >> the switches.
>> >>
>> >> --
>> >> Tim Wescott
>> >> Wescott Design Services
>> >> http://www.wescottdesign.com
>> >>
>> >> Posting from Google? See http://cfaj.freeshell.org/google/
>> >>
>> >> "Applied Control Theory for Embedded Systems" came out in April.
>> >> See details at http://www.wescottdesign.com/actfes/actfes.html
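Here is a minimal VHDL sketch of the preloaded-blockram-plus-wrapping-
counter idea John describes. The generics, names and placeholder
contents are invented for the example; you would initialise the array
with your real vectors:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity stim_rom is
  generic (
    VEC_WIDTH : natural := 16;   -- bits per stimulus vector
    VEC_DEPTH : natural := 1024  -- number of stored vectors
  );
  port (
    clk  : in  std_logic;
    stim : out std_logic_vector(VEC_WIDTH-1 downto 0)
  );
end stim_rom;

architecture rtl of stim_rom is
  type rom_t is array (0 to VEC_DEPTH-1) of
    std_logic_vector(VEC_WIDTH-1 downto 0);
  -- all-zeros is only a placeholder; preload with real vectors
  signal rom  : rom_t := (others => (others => '0'));
  -- address width matches the default depth; widen if VEC_DEPTH grows
  signal addr : unsigned(9 downto 0) := (others => '0');
begin
  process(clk)
  begin
    if rising_edge(clk) then
      -- registered read so synthesis can infer a block RAM
      stim <= rom(to_integer(addr));
      if addr = VEC_DEPTH - 1 then
        addr <= (others => '0');  -- wrap around
      else
        addr <= addr + 1;
      end if;
    end if;
  end process;
end rtl;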