> Now it's the tricky stuff. Any one have an idea how I can change the > refresh rate while the RAM is in operation? Why?Article: 125351
Hi, I'm looking for a very compact TCP/IP stack for the Nios II, and the ThreadX/NetX combo seems to be, at least on paper, the smallest. Anyone using this combo? How small are you able to build it, and with what services? How is their support? Any issues integrating it with the Nios IDE? Thanks, PaulArticle: 125352
Hi, I'm looking for a very compact TCP/IP stack for the Nios II, and the ThreadX/NetX combo seems to be, at least on paper, the smallest. Anyone using this combo? How small are you able to build it, and with what services? How well does it work? How is their support? Thanks, PaulArticle: 125353
Hi, I'm looking for a very compact TCP/IP stack for the Nios II. The ThreadX/NetX combo seems to be, on paper at least, the smallest. Anyone using this? How small are you able to build it, and with what services? Any issues integrating it with the Nios IDE? How about express logic support? Thanks, PaulArticle: 125354
On 23 Okt., 15:28, "PaulK" <paulspamdotkinzers...@quadtechworldspam.comspam> wrote: > Hi, > > I'm looking for a very compact TCP/IP stack for the Nios II, and the > ThreadX/NetX combo seems to be, at least on paper, the smallest. Anyone > using this combo? How small are you able to build it, and with what > services? How well does it work? How is their support? > > Thanks, > Paul could you please ask the very same question another 99 times?Article: 125357
Hello all, I need to develop a broadcast application using an FPGA, but I am new to FPGAs, so I am considering buying a demo board. At a minimum, I need 1 SDI output, 1 SDI input, 1 Gigabit Ethernet port, and some RAM (for buffering). What's the best choice? Do I need to buy a low-cost board first just to learn, or is it OK to learn on more expensive / complex boards? Any information on how to start FPGA programming is greatly appreciated. Tks, GikiArticle: 125359
Sorry, I had issues with my profile, which are now fixed. "Antti" <Antti.Lukats@googlemail.com> wrote in message news:1193147430.202133.34740@i13g2000prf.googlegroups.com... > On 23 Okt., 15:28, "PaulK" > <paulspamdotkinzers...@quadtechworldspam.comspam> wrote: >> Hi, >> >> I'm looking for a very compact TCP/IP stack for the Nios II, and the >> ThreadX/NetX combo seems to be, at least on paper, the smallest. Anyone >> using this combo? How small are you able to build it, and with what >> services? How well does it work? How is their support? >> >> Thanks, >> Paul > > could you please ask the very same question another 99 times? >Article: 125360
KJ wrote: > <sendthis@gmail.com> wrote in message > news:1193118296.434575.124270@k35g2000prh.googlegroups.com... >> Hi, >> >> I'm trying to control a SDR SDRAM (Micron 64Mbit chip) using an Altera >> DE2 board. I've gotten the hardware interface squared away (thanks >> everyone for your help!). >> >> Now it's the tricky stuff. Any one have an idea how I can change the >> refresh rate while the RAM is in operation? >> > The most obvious question would be 'Why?' > >> I have the DRAM interface built using the SOPC builder that comes with >> Quartus II using the NIOS II system. >> > That will limit your options (as would probably most other vendor IP DRAM > controllers). > >> I know you can change the refresh rate during the build but I need a >> way to change the refresh rate during operation. The only thing I can >> think of is maybe change the clock speed? I have it running off a >> 50Mhz clock.... >> > A simpler way would be to simply have a DRAM controller that has an explicit > 'Refresh Request' input that would cause the controller to perform a > refresh. Then connect that input up to any programmable timer or other > logic that you would like to use. Changing the clock rate would be far down > on my list of ways to accomplish your goal....but again, it begs the > original question about why you would want to change the refresh rate > dynamically at all. Assuming he has a good reason to change it, the safest thing to do would be to call a routine in flash to change it.Article: 125361
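A minimal VHDL sketch of the 'Refresh Request' timer KJ describes, assuming a free-running 50 MHz clock and a DRAM controller that accepts a one-cycle refresh-request strobe; the entity, port names and 16-bit interval width are illustrative and not taken from any vendor core:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    -- Programmable refresh-interval timer: pulses refresh_req once every
    -- 'interval' clock cycles.  Software can rewrite 'interval' at run time
    -- (e.g. through a memory-mapped register), which changes the effective
    -- refresh rate while the SDRAM stays in operation.
    entity refresh_timer is
      port (
        clk         : in  std_logic;              -- e.g. the 50 MHz system clock
        rst         : in  std_logic;              -- synchronous reset
        interval    : in  unsigned(15 downto 0);  -- refresh period in clock cycles
        refresh_req : out std_logic               -- one-cycle strobe to the controller
      );
    end entity refresh_timer;

    architecture rtl of refresh_timer is
      signal count : unsigned(15 downto 0) := (others => '0');
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          refresh_req <= '0';
          if rst = '1' then
            count <= interval;
          elsif count = 0 then
            count       <= interval;  -- a new interval value takes effect here
            refresh_req <= '1';       -- request one AUTO REFRESH command
          else
            count <= count - 1;
          end if;
        end if;
      end process;
    end architecture rtl;

However it is driven, the datasheet's refresh requirement for the Micron part still bounds how far the interval can legally be stretched.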
Hi all, I'm trying to develop a core that is connected to the PLB bus and uses a FIFO to communicate with the PPC. The FPGA I'm using is a Virtex-II Pro on a XUP board. When I go through the process to create a new device and select the "VHDL" stub, everything seems to work fine and the self test returns the expected results. When instead I use the Verilog stub, I get this result: Running FIFO_SelfTest() for LEDs_4Bit... ****************************** * User Peripheral Self Test ****************************** RST/MIR test... - write 0x0000000A to software reset register - read 0x10050308 (expected) from module identification register - RST/MIR write/read passed User logic slave module test... - write 1 to slave register 0 upper and lower portion - read 1, 1 from register 0 upper and lower portion - write 2 to slave register 1 upper and lower portion - read 2, 2 from register 1 upper and lower portion - slave register write/read passed User logic BRAM test... - local BRAM address is 0xCDE00000 - write pattern to local BRAM and read back - write/read BRAM failed on address 0xCDE00000 FIFO_SelfTest FAILED. The XPS I use is 9.1i. Has anyone seen this message before? I'm quite sure that I set all the memory addresses correctly in the HW modules and in the drivers. Thanks ~AndreaArticle: 125362
On Oct 22, 9:10 pm, Antti <Antti.Luk...@googlemail.com> wrote: > On 22 Okt., 20:11, Jon Elson <el...@wustl.edu> wrote: > > > > > Antti wrote: > > > On 21 Okt., 22:26, Wei Wang <camww...@gmail.com> wrote: > > > >>Hi, > > > >>Just wondering what could go wrong when synthesizing memories (i- > > >>cache, d-cache, and write buffer) of microprocessors on fpga Xilinx > > >>Virtex 5? Thanks! > > > >>Wei > > > > anything ;) > > > You beat me to it! Absolutely, where do you PLAN to put your mistakes?! > > > Jon > > ROTFL > eh, really there are TOO MANY things that can go wrong. > also the OP question was really too generic :) > > the frequency of the mistake is reverse proportional to the time left > to the deadline > hm, not very elegantly said, but think some will agree to that. > extreme TTM pressure will in most cases cause more mistakes to be > done. > > there is nothing wrong making mistakes, what counts is ability to find > and fix them > quick no matter the deadline pressure and hopefully not to repeat them > too often ;) > > so what can go wrong? if you are doing some own soft-core for > Virtex-5 ? > well how old is EDK? version is 9.2 now. I started to work and _fight_ > with > the grandpa of EDK - a product called V2PDK. EDK has evolved a great > deal. > but EDK 9.1SP2 can still not initialize a block of RAM when the size > of it is > say 24KB. If those 24KB are all you have and you need all of that, but > EDK 9.1SP2 > is only able to initialize the first 16K and not the rest due to the > BMM mapping bug?? > this means that the thing with SoC memory support aint so easy, or > that > hmm that Xilinx is doing a VERY BAD job at it, whatever you prefer. > > if you start from scratch, you will certainly get something wrong. > this is what was the reasoning behind "anything" > > Antti Thanks for all the inputs; I now have a better understanding of the range of RAM integration issues. I (a newbie) personally find it is not easy to debug this type of bug when something goes wrong with RAM integration (either inference or instantiation), not knowing whether it is a synthesis tool problem or a Verilog code problem. Nowadays, indeed, anything can go wrong: core IPs can go wrong, and synthesis tools can go wrong (I once very wrongly trusted synplify_pro 8.8.0.2).Article: 125363
sendthis@gmail.com wrote: > > I'm trying to control a SDR SDRAM (Micron 64Mbit chip) using an > Altera DE2 board. I've gotten the hardware interface squared away > (thanks everyone for your help!). > > Now it's the tricky stuff. Any one have an idea how I can change > the refresh rate while the RAM is in operation? > > I have the DRAM interface built using the SOPC builder that comes > with Quartus II using the NIOS II system. > > I know you can change the refresh rate during the build but I need > a way to change the refresh rate during operation. The only thing > I can think of is maybe change the clock speed? I have it running > off a 50Mhz clock.... Since the only purpose of the refresh circuitry is to avoid the memory dropping bits, it should already be running at the slowest possible rate, and speed reduction will be harmful, while speed increase will do no good. So this is not a good idea. What are you trying to do? -- Chuck F (cbfalconer at maineline dot net) Available for consulting/temporary embedded and systems. <http://cbfalconer.home.att.net> -- Posted via a free Usenet account from http://www.teranews.comArticle: 125364
>Since the only purpose of the refresh circuitry is to avoid the >memory dropping bits, it should already be running at the slowest >possible rate, and speed reduction will be harmful, while speed >increase will do no good. So this is not a good idea. > >What are you trying to do? Although it's not expressed in DRAM specs and you wouldn't want to rely on it, the effect of reducing refresh rate is to increase the access time. I'm not up-to-date with DRAM technology, but my experience with devices 30 years ago was that you could turn off refresh (and all other access) for 10s or more without losing the contents, provided you weren't pushing the device to its access time limits. So, it's not impossible that reducing refresh rate would have a use (albeit outside the published device spec). But, as you suggest, it would help if he would just tell us what he's trying to do. MikeArticle: 125365
<MikeShepherd564@btinternet.com> wrote in message news:1evsh3ds7i44iqhrsc4kldthlo2vb0tul2@4ax.com... > > Although it's not expressed in DRAM specs and you wouldn't want to > rely on it, the effect of reducing refresh rate is to increase the > access time. I'm not up-to-date with DRAM technology, but my > experience with devices 30 years ago was that you could turn off > refresh (and all other access) for 10s or more without losing the > contents, provided you weren't pushing the device to its access time > limits. > > So, it's not impossible that reducing refresh rate would have a use > (albeit outside the published device spec). But, as you suggest, it > would help if he would just tell us what he's trying to do. > > Mike Although that may well be the case for asynchronous DRAMs (because the reduced charge in the memory cell capacitor would mean that the sense amplifier took longer to register the state), this would not be the case for SDRAM since this registers the outputs a fixed number of clocks after the access starts. If the underlying access time increased by too much then the data would just be wrong.Article: 125366
On 9/5, 2:47, Paulo Dutra <paulo.du...@NOSPAM.NOSPAM> wrote: > Are your moduleA and moduleB listed in the PAO file? > If they are not listed in the PAO, then they will not be synthesized by XST. Xst > will give a warning that moduleA and moduleB are black box components. > > Take a look at your <proj_directory>/synthesis/opb2ip_bridge_0_xst.srp > This is the synthesis report file for your component. Look for warnings > regarding moduleA and moduleB. > > > > L. Schreiber wrote: > > Gabor schrieb: > >> On Sep 4, 7:37 am, "L. Schreiber" <l.s.rock...@web.de> wrote: > >>> Hello, > > >>> I created an ipcore opb2ip_bridge (with edk's wizard) interfacing the > >>> opb and added it to the edk reference design. So far, so good. While > >>> running generate bitstream, synthesis stage runs through, but > >>> implementation stage aborts immediately with ERROR:NgdBuild:604. > > >>> --------------------------- > >>> logfile excerpt: > >>> ERROR:NgdBuild:604- logical block > >>> 'opb2ip_bridge_0/opb2ip_bridge_0/USER_LOGIC_I/moduleA_0' with type > >>> 'moduleA' could not be resolved. A pin name misspelling can cause this, > >>> a missing edif or ngc file, or the misspelling of a type name. Symbol > >>> 'moduleA' is not supported in target 'virtex2p'. > > >>> ERROR:NgdBuild:604- logical block > >>> 'opb2ip_bridge_0/opb2ip_bridge_0/USER_LOGIC_I/moduleB_0' with type > >>> 'moduleB' could not be resolved. A pin name misspelling can cause this, > >>> a missing edif or ngc file, or the misspelling of a type name. Symbol > >>> 'moduleB' is not supported in target 'virtex2p'. > >>> --------------------------- > > >>> My user-defined ipcore is organized in a hierarchy of vhdl modules like: > > >>> -opb2ip_bridge > >>> |-USER_LOGIC > >>> ||-moduleA > >>> ||-moduleB > >>> |||-moduleA > > >>> Could this be the reason for the error? If so, is there any > >>> configuration file I would have to modify beforehand to make the > >>> synthesis/implementation stage aware that my ipcore is designed > >>> modularly, and how do I resolve the symbol names? > > >>> Or what is the real problem and how do I have to solve it? > > >>> I'm working under edk and ise version 8 and latest service packs. The > >>> reference design is not the problem, because it's already working in > >>> other designs. > > >>> Thanks for your help. > >>> Greetings, Lars. > > >> I've seen something similar with COREgen modules in mixed language > >> designs. > > No, it's a vhdl only design. > > >> Is moduleB a black box? > > No, there are no black-box attributed instances inside the peripheral ip > > design. > > >> Do you have a .ngc file created for moduleB? > > No, it doesn't seem that the edk has built ngc files for the submodules > > of the ip while executing "generate bitstreams". Inside the > > implementations directory there is only an opb2ip_0_wrapper.ngc for the > > toplevel of my ipcore besides all the ngc files from the reference design. > > >> Is it in the project directory? > > My ip lies inside an external ip repository and is linked into the edk > > reference design project. > > >> If you answer yes to all three questions, then there is the > >> possibility > >> that ISE is looking for your .ngc code in another file such as > >> moduleB_0.ngc due to the hierarchy created. In that case copying > >> moduleB.ngc to moduleB_0.ngc can fix the problem. > > >> The same would apply to moduleA. > >> HTH, > >> Gabor OK, I have corrected this error. The main problem was that the project could not find where the "black box" was; you should add the "black box" sources to the project library, then rerun synthesis and finally implement the design.Article: 125367
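To make Paulo's PAO point above concrete: every HDL source of the pcore has to be listed, in bottom-up compile order, in the data/<core>_v2_1_0.pao file of the pcore, otherwise XST treats the missing entities as black boxes and NgdBuild later stops with the NgdBuild:604 error quoted in this thread. A sketch of what the entries might look like for the hierarchy above (the library name and version suffix are illustrative, not taken from the original project):

    # opb2ip_bridge_v1_00_a/data/opb2ip_bridge_v2_1_0.pao
    # List sources bottom-up, in compile order.
    lib opb2ip_bridge_v1_00_a moduleA       vhdl
    lib opb2ip_bridge_v1_00_a moduleB       vhdl
    lib opb2ip_bridge_v1_00_a user_logic    vhdl
    lib opb2ip_bridge_v1_00_a opb2ip_bridge vhdl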
On Oct 23, 5:27 pm, "David Spencer" <davidmspen...@verizon.net> wrote: > <MikeShepherd...@btinternet.com> wrote in message > > news:1evsh3ds7i44iqhrsc4kldthlo2vb0tul2@4ax.com... > > > > > Although it's not expressed in DRAM specs and you wouldn't want to > > rely on it, the effect of reducing refresh rate is to increase the > > access time. I'm not up-to-date with DRAM technology, but my > > experience with devices 30 years ago was that you could turn off > > refresh (and all other access) for 10s or more without losing the > > contents, provided you weren't pushing the device to its access time > > limits. > > > So, it's not impossible that reducing refresh rate would have a use > > (albeit outside the published device spec). But, as you suggest, it > > would help if he would just tell us what he's trying to do. > > > Mike > > Although that may well be the case for asynchronous DRAMs (because the > reduced charge in the memory cell capacitor would mean that the sense > amplifier took longer to register the state), this would not be the case for > SDRAM since this registers the outputs a fixed number of clocks after the > access starts. If the underlying access time increased by too much then the > data would just be wrong. For certain addressing patterns, the refresh can be eliminated alltogether, when the addressing sequence is such that all (used) memory cells are naturally being read, and thus refreshed, within the required time. Peter AlfkeArticle: 125368
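Rough numbers for the kind of SDR SDRAM discussed earlier (typical datasheet figures, to be checked against the actual Micron part; reading any column of a row refreshes the whole row):

    refresh requirement : 4096 rows every 64 ms  ->  64 ms / 4096 = 15.6 us per row on average
    50-60 Hz frame scan : whole buffer re-read every 16.7-20 ms, which is well inside 64 ms

So a full-frame scan can cover the refresh requirement by itself, provided the frame buffer is laid out so that the scan really visits every DRAM row, as the follow-ups below point out.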
On 24 Okt., 07:50, Peter Alfke <al...@sbcglobal.net> wrote: > On Oct 23, 5:27 pm, "David Spencer" <davidmspen...@verizon.net> wrote: > > > > > > > <MikeShepherd...@btinternet.com> wrote in message > > >news:1evsh3ds7i44iqhrsc4kldthlo2vb0tul2@4ax.com... > > > > Although it's not expressed in DRAM specs and you wouldn't want to > > > rely on it, the effect of reducing refresh rate is to increase the > > > access time. I'm not up-to-date with DRAM technology, but my > > > experience with devices 30 years ago was that you could turn off > > > refresh (and all other access) for 10s or more without losing the > > > contents, provided you weren't pushing the device to its access time > > > limits. > > > > So, it's not impossible that reducing refresh rate would have a use > > > (albeit outside the published device spec). But, as you suggest, it > > > would help if he would just tell us what he's trying to do. > > > > Mike > > > Although that may well be the case for asynchronous DRAMs (because the > > reduced charge in the memory cell capacitor would mean that the sense > > amplifier took longer to register the state), this would not be the case for > > SDRAM since this registers the outputs a fixed number of clocks after the > > access starts. If the underlying access time increased by too much then the > > data would just be wrong. > > For certain addressing patterns, the refresh can be eliminated > alltogether, when the addressing sequence is such that all (used) > memory cells are naturally being read, and thus refreshed, within the > required time. > Peter Alfke Sinclair ZX? At least some old Z80 home computers used refresh by video scan. AnttiArticle: 125369
On Wed, 24 Oct 2007 07:15:08 -0000, Antti <Antti.Lukats@googlemail.com> wrote: >> For certain addressing patterns, the refresh can be eliminated >> alltogether, when the addressing sequence is such that all (used) >> memory cells are naturally being read, and thus refreshed, within the >> required time. >> Peter Alfke- Zitierten Text ausblenden - >> >> - Zitierten Text anzeigen - > >Sinclair ZX? >at least some old Z80 homecomputers used refresh by video scan Yes, and it's a completely ridiculous way to do it. The added cost of making frequent additional row accesses is far greater than the cost of the necessary refresh. A DRAM row is effectively a cache. When you access a row, you read the whole row into the DRAM's row buffer as a free side-effect, and can then make very fast column accesses to anly location in the row. It's preposterous to throw away that massive free bandwidth just to save yourself some refresh effort - unless you're trying to design a $80 home computer/toy in the early 1980s. In those days, the video buffer was a sufficiently large fraction of the overall DRAM that it was reasonable to lay out the video memory so that every row was automatically visited by the video scan, giving a refresh cycle every 20ms (16.7ms in the USA). That was out-of-spec for many DRAMs of the day (8ms refresh cycle) but in practice it worked in almost all cases - and the manufacturers of those computers had a shoddy enough warranty policy that they weren't going to worry about a handful of customers complaining about occasional mysterious memory corruption on a hot day. -- Jonathan Bromley, Consultant DOULOS - Developing Design Know-how VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK jonathan.bromley@MYCOMPANY.com http://www.MYCOMPANY.com The contents of this message may contain personal views which are not the views of Doulos Ltd., unless specifically stated.Article: 125370
Hello All, I have a system which consists of many subsystems connected to a common bus (this includes 16-bit data and 16-bit address buses). The system is FPGA-based and I want to control it through a GUI on a PC host. Hence, a program on the PC host has to know the addresses of the subsystems in order to communicate with them. My question is: how will my program know the address of each subsystem, and how are these addresses assigned? So far, my host knows only the addresses of its external ports, i.e. serial and parallel, but has no idea what the addresses of the subsystems on the FPGA could be.Article: 125371
>For certain addressing patterns, the refresh can be eliminated >alltogether, when the addressing sequence is such that all (used) >memory cells are naturally being read, and thus refreshed, within the >required time. That happens in a couple of common cases... Running video refresh out of DRAM Running DSP code Running memory tests :) I once worked on a memory board that worked better (at least as measured by memory diagnostics) when the refresh was clipleaded out. (We had a bug in the arbiter.) -- These are my opinions, not necessarily my employer's. I hate spam.Article: 125372
> another thing, the actel FPGA is REALLY full, the utilization varied > between 81-99% > so i was positivly surprised that actel tools never had any issues > with the implementation > no matter how full the FPGA was. Antti, I've only skimmed this thread, but from what you've said the tools _did_ have a problem with the implementation, they just didn't know/report it. I'd rather have had the tool tell me it couldn't P&R the design than have to spend days debugging the faulty output. Nial.Article: 125373
On 24 Okt., 12:08, "Nial Stewart" <nial*REMOVE_TH...@nialstewartdevelopments.co.uk> wrote: > > another thing, the actel FPGA is REALLY full, the utilization varied > > between 81-99% > > so i was positivly surprised that actel tools never had any issues > > with the implementation > > no matter how full the FPGA was. > > Antti, > > I've only skimmed this thread, but from what you've said the tools > _did_ have a problem with the implementation, they just didn't > know/report it. > > I'd rather have had the tool tell me it couldn't P&R the design > than have to spend days debugging the faulty output. > > Nial. Hi Nial, well I also would have assumed that if the tools detect internal clock-routing skew beyond where the FPGA definitely will not operate, they should report it and not allow the P&R. Also, the post-layout simulation did not show problems - well, not in the shift-register case. The same thing re-appeared in another part of the logic, where the post-layout simulation did show the error. Eh, just a hard lesson (over 5 days of total struggle): for Actel, don't believe even post-layout sim; force all signals that clock any FF onto global nets. An Actel cell has a limitation on connections, so I tried a trick to save resources and fell into a trap. AnttiArticle: 125374
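For anyone hitting the same trap: on Actel parts the usual fix is to push the internally generated clock onto a global resource by instantiating the CLKINT macro between the divider flop and everything it clocks. A minimal VHDL sketch, assuming the CLKINT component of the relevant Actel family macro library (check the macro name and ports against your family's macro guide):

    library ieee;
    use ieee.std_logic_1164.all;

    -- Divide-by-two clock generator whose output is forced onto an Actel
    -- global net via CLKINT instead of ordinary routing, so the skew at the
    -- downstream flip-flop clock pins is controlled.
    entity divclk_global is
      port (
        clk_in   : in  std_logic;
        rst      : in  std_logic;
        clk_div2 : out std_logic   -- divided clock, now on a global network
      );
    end entity divclk_global;

    architecture rtl of divclk_global is
      -- Black-box declaration of the Actel global-buffer macro (family dependent).
      component CLKINT
        port (A : in std_logic; Y : out std_logic);
      end component;
      signal div_ff : std_logic := '0';
    begin
      process (clk_in)
      begin
        if rising_edge(clk_in) then
          if rst = '1' then
            div_ff <= '0';
          else
            div_ff <= not div_ff;   -- toggle flip-flop: divide clk_in by two
          end if;
        end if;
      end process;

      u_gbuf : CLKINT port map (A => div_ff, Y => clk_div2);
    end architecture rtl;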
The way this post is phrased could invite off-topic, rude replies, so I'll try to answer it "correctly" with some facts. Congratulations Vagant, you have just bought yourself one of the best value-for-money FPGA development boards out there today, I think. You'll probably be happy with it for many years to come if you are willing to learn. I also bought it when Digilent started selling it without the MicroBlaze kit, which made it a bargain. Of course, just because it is called "MicroBlaze" there is no need to use it solely for that. You can use this board for almost anything, as you can see from the demos shipped with it: a stand-alone web server, a complete MicroBlaze-based Linux operating system, and so on. My suggestion would be that you start off with the classic button/LED trick: install ISE WebPack and, using Verilog or VHDL, set up each button to light a LED. This is of course massive hardware overkill, but at least you know your tools and that everything is connected correctly. Don't start with the LCD, because it needs SPI control and a specific protocol, which would be a little messy to start with, I think. Also skip the rotation device, as it can also bring you some problems as a beginner. I have ported the FPGA-64 (part of the C-One project) to this kit, which implements a fairly complete Commodore 64; I just did it for fun and for the experience. Another suggestion is to use the Windows tools, as the USB connection under PC Linux can bring you some initial problems you don't need on your first test run. Good luck with your kit!
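The button/LED starter exercise suggested above is only a few lines of VHDL. A minimal sketch - the port widths and names are illustrative and must be matched to the pin locations in the board's UCF constraint file:

    library ieee;
    use ieee.std_logic_1164.all;

    -- "Know your tools" design for the Spartan-3E board: each push button
    -- simply drives one LED.
    entity buttons_to_leds is
      port (
        btn : in  std_logic_vector(3 downto 0);
        led : out std_logic_vector(3 downto 0)
      );
    end entity buttons_to_leds;

    architecture rtl of buttons_to_leds is
    begin
      led <= btn;   -- purely combinational: press a button, light an LED
    end architecture rtl;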