"Leon" <leon355@btinternet.com> wrote in message news:3d4b6f41-b742-405d-87bd-47e2fb75ac1d@8g2000hse.googlegroups.com... "Not if you have four 400 MIPS cores on the chip, each with 64 bits of I/O, 64k of RAM, with 3.2 Gbit/s comms links between cores and 32 threads per core, with switching between threads in one clock. If the software is free, it is a very cost-effective solution, especially as the chips will be very cheap." Well, if you're already bought the MIPS, then you're correct, Leon, you might as well get some free peripherals out of it. This is certainly the case with PCs, where even a cheap box has a several GHz CPU and -- for 90+% of users -- has tons of free cycles sitting around that can be used for soft modems, sound card DSP, etc. On the other hand, for embedded systems the best solution isn't always so clear-cut. Look at everyone using FTDI (or similar) USB to RS-232 chips with some dirt cheap low-end microcontroller -- often this approach is cheaper overall than using a "USB microcontroller," especially if the product volumes are low so development cost is significant. Or look at John Larkin's boxes -- he uses an Xport to turn Ethernet back into serial, since (presumably) overall it's cheaper/more effective than having one of his guys sit down and figure out how to add an Ethernet stack to his 68k-family CPUs. Even though the stack itself is surely freely available somewhere, the integration time is still significant. ---JoelArticle: 135751
On 13 Oct, 13:21, "MMJ" <S...@aldrig.com> wrote: > > If you say what the specific bugs are, maybe I can offer some help or > > advice. > > One of the main problems is the poor performance of the debug interface in > the Mico32. A download speed of 20-30 kbit/sec is just too slow when > developing large applications. Is there any way to speed up this interface? Are you developing on Lattice's ECP2 board or your own custom board? Cheers, Jon
Article: 135752
pfrinec@yahoo.co.uk wrote: > Thanks everybody for really useful ideas. > I think the best solution for me would be DOSFS with an SPI core because > I need to write multiple image files to a micro-SD card and later make > them available to PC users. > > Link to Trenz Electronic where you can purchase the Micro-SD Adapter for the > Xilinx PMOD Extension Slot: > http://shop.trenz-electronic.de/catalog/product_info.php?cPath=1_65&products_id=348 > (Online Shop -> FPGA/CPLD Boards -> Industrial Modules) If the sizes of the files are fixed, for example screen captures, you can create multiple files on the SD card using a PC and they will all occupy contiguous blocks. Finding the first block of each file in the directory is easy enough, but you can even save the start block addresses somewhere - for example in an unused physical block between the MBR and the first partition. The FPGA will read this special "directory" and save data starting from the first block of each file. The trick is to never delete the files but just overwrite them with the same amount of data. -Alex.
Article: 135753
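A minimal sketch of the scheme Alex describes above, assuming a made-up layout: one reserved sector at LBA 1, between the MBR and the first partition, holding start-LBA/length pairs. The sd_read_block()/sd_write_block() routines stand in for whatever the low-level SD driver provides; none of these names come from DOSFS or any particular library.

#include <stdint.h>
#include <string.h>

#define DIR_LBA        1u      /* reserved sector between the MBR and the first partition */
#define MAX_IMAGES     16u
#define SECTOR_BYTES   512u

/* Hypothetical on-card table: one entry per pre-allocated, contiguous image file. */
struct img_entry {
    uint32_t start_lba;   /* first block of the file, found once on the PC */
    uint32_t num_blocks;  /* fixed size, never changed after creation      */
};

/* Assumed to be provided by the low-level SD/SPI driver. */
int sd_read_block(uint32_t lba, uint8_t *buf);
int sd_write_block(uint32_t lba, const uint8_t *buf);

static int load_image_table(struct img_entry *tab)
{
    uint8_t sector[SECTOR_BYTES];
    if (sd_read_block(DIR_LBA, sector) != 0)
        return -1;
    memcpy(tab, sector, MAX_IMAGES * sizeof *tab);
    return 0;
}

/* Overwrite image 'idx' in place; the FAT entries written by the PC never
 * change because neither the start block nor the file length changes. */
static int write_image(const struct img_entry *tab, unsigned idx,
                       const uint8_t *data, uint32_t num_blocks)
{
    if (idx >= MAX_IMAGES || num_blocks != tab[idx].num_blocks)
        return -1;                          /* size must match exactly */
    for (uint32_t i = 0; i < num_blocks; i++)
        if (sd_write_block(tab[idx].start_lba + i, data + i * SECTOR_BYTES) != 0)
            return -1;
    return 0;
}

A PC-side tool would fill in the table once, after creating the files, by walking the FAT to find each file's first cluster.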
On 1 Oct, 15:29, Leon <leon...@btinternet.com> wrote: ... > I've booked for the London seminar next month, participants get a dev > kit. I might as well get the other one I've reserved, as well. They > are good value. Did you get any confirmation of booking? I too have booked for one of the London seminars but have nothing in writing to confirm that a place has been allocated. -- James
Article: 135754
Leon wrote: > Not if you have four 400 MIPS cores on the chip, each with 64 bits of > I/O, 64k of RAM, with 3.2 Gbit/s comms links between cores and 32 > threads per core, with switching between threads in one clock. That's somewhat more on-chip resources and higher performance than we had at Ubicom, but it's also trying to do commensurately more sophisticated and higher performance functions in the software, so I don't expect it to actually work out any better than things did for Ubicom. > If the > software is free, it is a very cost-effective solution, especially as > the chips will be very cheap. Software is NEVER free, even when someone is giving it away. You'll pay for it one way or another. Eric
From: Rich Grise <rich@example.net> Subject: Re: XMOS XC-1 kits are shipping Newsgroups: comp.arch.embedded,comp.arch.fpga,comp.dsp,sci.electronics.design Date: Tue, 14 Oct 2008 21:51:32 GMT
On Tue, 14 Oct 2008 14:28:50 -0700, Eric Smith wrote: > > Software is NEVER free, even when someone is giving it away. You'll > pay for it one way or another. Dude, you gotta learn where to shop. ;-) Cheers! Rich
Article: 135755
Rich Grise <rich@example.net> writes: > Dude, you gotta learn where to shop. ;-) I'm also going to have to get myself one of those prestigious example.net email addresses like you have. I hear that they provide really good spam filtering. :-)
Article: 135756
Hi. I have a design with a Virtex 5 and a DDR2 memory. It seems like there is some problem when I read from the DDR2 memory, and I was hoping that someone here might recognize the problem. Here is what happens. My base address for the memory is 0x88000000. I write the following bytes (byte writes): 0x11 to 0x88000000, 0x22 to 0x88000001, 0x33 to 0x88000002, 0x44 to 0x88000003. When I read from the memory it looks like this (32-bit format):
0x88000000 11000000
0x88000004 00000000
0x88000008 00000000
0x8800000C 00000000
0x88000010 00223344
But sometimes it seems to work, and then I try to reload the bit-file and I get the same problem again. What I notice is that when it works I write e.g. 0x11223344 and read back:
0x88000000 11223344
0x88000004 00000000
0x88000008 00000000
0x8800000C 00000000
0x88000010 00000000
If I reload the bit-file I will read back:
0x88000000 11000000
0x88000004 00000000
0x88000008 00000000
0x8800000C 00000000
0x88000010 00223344
So my conclusion is that the write part always seems to work, but it's the read part that doesn't work. Could there be a problem with the IDELAYCTRL that is not calibrated? Thanks, Niklas
Article: 135757
On Oct 12, 8:10 pm, akineko <akin...@gmail.com> wrote: > Hello everyone, > > I would like to create a scheme to hook up an external CPU model to a > Verilog design. > I have already established a basic communication protocol to link > Verilog design to an external device. > So, it should be easy to link a CPU model to a Verilog design. I do that for YARI, a MIPS-compatible processor. Any feature is first implemented and tested in the ISA-level simulator (yarisim). Once that works, the feature is implemented in the RTL (Verilog) and the two models are co-simulated on the tests that were used for the ISA-level simulator, as well as on more substantial applications. The cosimulation model I use is decidedly simple: yarisim is run in a mode in which it parses the trace output of the Verilog simulation. This has the advantage of not depending on anything but the ability to $display() stuff in the RTL model (as opposed to horrible PLI hacks etc). For more detail, see http://thorn.ws/yari and/or http://www.vmars.tuwien.ac.at/php/pserver/extern/download.php?fileid=1547 Tommy
Article: 135758
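For illustration of the trace-comparison approach Tommy describes above -- the trace line format and the iss_* hooks below are invented for this example, not YARI's actual interface -- a checker in that style can be as small as:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical ISA-simulator hooks; a real ISS would expose something similar. */
void     iss_step(void);          /* execute one instruction                 */
uint32_t iss_pc(void);            /* PC of the instruction just retired      */
uint32_t iss_gpr(unsigned r);     /* current value of general register r     */

/* Read the RTL trace from stdin (e.g. piped from the Verilog simulator) and
 * check every retired instruction against the ISS.
 * Assumed trace line format: "COMMIT pc=%x rd=%u val=%x"                     */
int main(void)
{
    char line[256];
    unsigned long n = 0;

    while (fgets(line, sizeof line, stdin)) {
        uint32_t pc, val;
        unsigned rd;
        if (sscanf(line, "COMMIT pc=%x rd=%u val=%x", &pc, &rd, &val) != 3)
            continue;                       /* skip unrelated $display output */

        iss_step();
        n++;
        if (iss_pc() != pc || iss_gpr(rd) != val) {
            fprintf(stderr, "mismatch at instruction %lu: RTL pc=%08x r%u=%08x, ISS pc=%08x r%u=%08x\n",
                    n, pc, rd, val, iss_pc(), rd, iss_gpr(rd));
            return EXIT_FAILURE;
        }
    }
    printf("%lu instructions matched\n", n);
    return EXIT_SUCCESS;
}

Because the only coupling is plain text, the same checker works with any simulator that can print one line per retired instruction.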
On Sep 23, 8:28 am, m <martin.use...@gmail.com> wrote: > > See http://www.altera.com/literature/wp/wp-01034-Utilizing-Leveling-Techn... > > for more detail. > > OK, that makes sense. Do the Arria parts have this capability as > well? Nope, they don't have dynamic OCT or read/write leveling circuitry. > Which parts can support a 533MHz DDR3 clock right now? The fastest (-2) Stratix III speed grade supports 533 MHz DDR3, but as the datasheet says (see http://www.altera.com/literature/hb/stx3/stx3_siii51008.pdf) fine-tuning of the DDR3 IP core for 533 MHz operation is not complete, so you should not go into production at 533 MHz DDR3 with the current (8.0) version of Quartus. Vaughn Altera [v b e t z (at) altera.com]
Article: 135759
On 2008-10-14, 500milesaway <500milesaway@gmail.com> wrote:
> begin
> clk<='0';
> in_i1<="11111111";
> wait for 50 ns;
>
> clk<='1';
> in_i1<="00000001";
> wait for 50 ns;
I think that you are being bitten by some sort of race condition here. This seems to be a common problem the first time you use a testbench with a post-synthesis netlist. What you are essentially saying is that the clock and your input values change at more or less exactly the same time. Unfortunately simulation primitives (especially in post-place-and-route simulation) usually have some sort of delay built in. While a VHDL language lawyer could probably take a look at the simulation libraries which are used and figure out exactly what is going on in your case, I don't think that is very interesting. If you change it so that the input signals change slightly after the clock edge you should get a working post-translate simulation. As in:
clk<='1';
wait for 10 ns;
in_i1<="00000001";
wait for 40 ns;
(You should be able to use much less than 10 ns delay by the way.) /Andreas
Article: 135760
niklas_molin@hotmail.com wrote:
> Hi.
>
> I have a design with a Virtex 5 and a DDR2 memory.
> It seems like there is some problem when I read from the DDR2 memory
> and I was hoping that someone here might recognize the problem.
>
> Here is what happens.
> My base address for the memory is 0x88000000
>
> I write the following bytes (byte writes): 0x11 to 0x88000000, 0x22 to
> 0x88000001, 0x33 to 0x88000002, 0x44 to 0x88000003.
>
> When I read from the memory it will look like (32-bit format):
> 0x88000000 11000000
> 0x88000004 00000000
> 0x88000008 00000000
> 0x8800000C 00000000
> 0x88000010 00223344
>
> But sometimes it seems to work and then I try to reload the bit-file
> and then I get the same problem again.
> What I notice is that when it works I write e.g. 0x11223344 and will
> read back
> 0x88000000 11223344
> 0x88000004 00000000
> 0x88000008 00000000
> 0x8800000C 00000000
> 0x88000010 00000000
>
> If I reload the bit-file I will read back:
> 0x88000000 11000000
> 0x88000004 00000000
> 0x88000008 00000000
> 0x8800000C 00000000
> 0x88000010 00223344
>
> So my conclusion is that the write part always seems to work, but it's
> the read part that doesn't work.
> Could there be a problem with the IDELAYCTRL that is not calibrated?
>
> Thanks,
> Niklas
Hi Niklas, Your question is quite broad and lacks details.
- Which controller are you using? MIG-generated, your own, etc.
- What SDRAM type is it? Burst length of 4?
- When you say base address, are you using an embedded processor?
- Does it work in simulation? *g*
Cheers, -Pat
Article: 135761
> I write following bytes (byte write) 0x11 to 0x88000000, 0x22 to > 0x88000001, 0x33 to 0x88000002, 0x44 to 0x88000003. Are you somehow translating a flat byte address to a DRAM row and column address? What DDR row and column addresses and data mask signals are being generated? If your DRAM addressing is not aligned to row boundaries then you may get unexpected results. As PatC says, need more info!
Article: 135762
On Oct 15, 8:24 am, Rob <BertyBoos...@googlemail.com> wrote: > > I write following bytes (byte write) 0x11 to 0x88000000, 0x22 to > > 0x88000001, 0x33 to 0x88000002, 0x44 to 0x88000003. > > Are you somehow translating a flat byte address to a DRAM row and > column address? What DDR row and column addresses and data mask > signals are being generated? > If your DRAM addressing is not aligned to row boundaries then you may > get unexpected results. > > As PatC says, need more info! One more question... Do you have this working in functional simulation?
Article: 135763
> If your DRAM addressing is not aligned to row boundaries then you may > get unexpected results. Sorry, I meant "burst boundaries".
Article: 135764
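To make the burst-boundary point above concrete, here is a rough sketch of a typical address split for a x32 DDR2 part; the bit widths, ordering and burst length are illustrative only and will differ for the real device and controller.

#include <stdint.h>
#include <stdio.h>

/* Illustrative geometry only: 10 column bits, 13 row bits, 2 bank bits,
 * a 32-bit (4-byte) data bus and burst length 4. Real parts and
 * controllers map the bits differently.                                  */
#define COL_BITS   10u
#define ROW_BITS   13u
#define BANK_BITS   2u
#define BUS_BYTES   4u
#define BURST_LEN   4u

int main(void)
{
    uint32_t byte_addr = 0x0000003Cu;             /* example start address      */
    uint32_t word      = byte_addr / BUS_BYTES;   /* address in bus-width words */
    uint32_t col       = word & ((1u << COL_BITS) - 1u);
    uint32_t bank      = (word >> COL_BITS) & ((1u << BANK_BITS) - 1u);
    uint32_t row       = (word >> (COL_BITS + BANK_BITS)) & ((1u << ROW_BITS) - 1u);

    /* A burst always starts on a multiple of BURST_LEN words; a start
     * address that is not on that boundary wraps within the burst instead
     * of spilling into the next one, which shows up as reordered data.    */
    uint32_t burst_start = col & ~(BURST_LEN - 1u);

    printf("byte 0x%08x -> row %u, bank %u, col %u (burst starts at col %u)\n",
           byte_addr, row, bank, col, burst_start);
    return 0;
}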
Hi, can someone who has used Xilinx's MIG DDR2 controller help? I am trying to simulate the DDR2 controller with the MT4HTF3264HY-53ED3 memory model generated by MIG 2.0 (Xilinx ISE 10.1 SP3). I had set the burst length to 8. I need to have a good knowledge of the timing of the controller as I need to optimize its usage for an embedded system. In the RTL simulation, I write to the DDR2 RAM model at a 133 MHz clock frequency from memory locations 0x0 to 0x400 in 1 burst via the application interface, and then observe the timing on the DDR2 bus. The DDR2 model is the one that I generated from MIG 2.0. Here is what I noticed:
1) A precharge occurred after writing to memory location 0x260. This is followed by a refresh and activate. To my understanding, a precharge and subsequent activate should happen only if I am accessing an address greater than 1024 or 0x3FF, since the column width of the RAM is 10 bits. Moreover, I am doing a write burst to sequential addresses 0 to 0x400.
2) TRAS, which is the delay between active and precharge, is supposed to be 40 ns, but from the above point this does not seem to be much longer.
3) Also, I thought 7800 us is the interval between 2 refreshes. How come it refreshes so much earlier? Isn't this very inefficient?
Did I miss anything? Kindly help and advise. Thanks in advance, Chris
Article: 135765
I've just received my XC-1 kit and have started playing with it. It comes with a nice demo that does interesting things with the LEDs, buttons and speaker, uses a UART, and includes a simple game. The tools are open source, and work OK from the command line. The IDE uses Eclipse, and I can't seem to get an XC program to build properly, although a C program is OK. Support on the Xlinkers forum is very good; I've asked a few questions there and an XMOS engineer has answered them in a few minutes. Leon
Article: 135766
On 14 Oct, 22:26, James Harris <james.harri...@googlemail.com> wrote: > On 1 Oct, 15:29, Leon <leon...@btinternet.com> wrote: > ... > > > I've booked for the London seminar next month, participants get a dev > > kit. I might as well get the other one I've reserved, as well. They > > are good value. > > Did you get any confirmation of booking? I too have booked for one of > the London seminars but have nothing in writing to confirm that a > place has been allocated. > > -- > James No, I didn't either. I had a problem with their payment system and managed to make two bookings. I phoned them and they confirmed that they'd received them and cancelled one of them for me. You could try phoning them if you aren't sure. Leon
Article: 135767
(comp.dcom.lans.ethernet added) Fred wrote: > I have the IEEE standard 802.3 documents but find they're not the most > readable in the world. I find they confuse issues by having many > standards in one document. > Is there a more readable document around which is specifically related > to 100Base-TX? > Does anyone know of example waveforms I could include in a test > bench, so I can check the scrambling and descrambling of data? I am > aware of a very good site for 10Base-T but not for fast ethernet. comp.dcom.lans.ethernet is probably the place to ask. Otherwise, I recommend Rich Seifert's book: http://search.barnesandnoble.com/Gigabit-Ethernet/Rich-Seifert/e/9780201185539 Yes, despite the name he discusses previous ethernet standards. -- glen
Article: 135768
> If the sizes of the files are fixed, for example screen captures, you > can create multiple files on the SD card using a PC and they will all > occupy contiguous blocks. Finding the first block of each file in the > directory is easy enough, but you can even save the start block > addresses somewhere - for example in an unused physical block between > MBR and the first partition. The FPGA will read this special "directory" > and save data starting from the first block of each file. The trick is > to never delete the files but just overwrite with the same size data. > > -Alex. Interesting solution. Once I have low-level drivers for the micro-SD card, I'll give it a try. Thanks, Alex.
Article: 135769
On 14 Okt., 15:57, Leon <leon...@btinternet.com> wrote: > On 13 Oct, 20:42, Eric Smith <e...@brouhaha.com> wrote: > > > > > > > Bob wrote about XMOS: > > > > They seem far too fixated on doing everything in software, things like > > > ethernet where there is no point shoveling bytes in software if > > > hardware can take care of it. > > Leon wrote: > > > They are supplying free libraries for all the usual peripheral > > > functions. Doing stuff like that in software is much cheaper than > > > using hardware, and easier in many ways. > > > Been there, done that, and it's not cheaper or easier when you > > consider the overall system cost impact, not just the "benefit" of > > leaving out the hardware block. That was the path Scenix/Ubicom went > > down, calling it "virtual peripherals", and it was not very > > successful. Ubicom has since added hardware for Ethernet, USB, > > etc. to their most recent parts. The reality is that a hardware > > Ethernet MAC costs less than the total system cost impact of the > > software alternative. > > > Eric > > Not if you have four 400 MIPS cores on the chip, each with 64 bits of > I/O, 64k of RAM, with 3.2 Gbit/s comms links between cores and 32 > threads per core, with switching between threads in one clock. If the > software is free, it is a very cost-effective solution, especially as > the chips will be very cheap. XMOS found a way to make their chips so cheap, that they can do everything cheaper in software than all others in hardware? Why don't they sell this revolutionary technology to semiconductor / FPGA companies then? They just wouldn't tell anyone about it and the same big Market would use it without knowing, but at lower prices.
Article: 135770
On Oct 13, 10:36 pm, uraniumore...@gmail.com wrote: > Thanks for the help. So, I can assure myself that the FPGA is counting > as long as the FPGA is always driven by some input square wave and not > floating due to some external quivering? How would I verify that it is > actually counting (despite the fact that the count value got to the > max)? Any ideas? One thing you could do is count periods of a faster clock between the pulses to measure the inter-arrival time. Any substantial irregularities probably indicate glitches (or an unstable source). You would then need a way to dump that data out. You can do that manually through the serial port but it takes a fair amount of coding. Does the chipscope logic analyzer work on the webpack tools? If so that would be a good way to monitor the data from this type of experiment (you could also just clock the analyzer with a faster clock and look at the input and expect to see a square wave, or have it look at your count register).
Article: 135771
On Oct 15, 5:33 pm, pfri...@yahoo.co.uk wrote: > > If the sizes of the files are fixed, for example screen captures, you > > can create multiple files on the SD card using a PC and they will all > > occupy contiguous blocks. Finding the first block of each file in the > > directory is easy enough, but you can even save the start block > > addresses somewhere - for example in an unused physical block between > > MBR and the first partition. The FPGA will read this special "directory" > > and save data starting from the first block of each file. The trick is > > to never delete the files but just overwrite with the same size data. > > > -Alex. > > Interesting solution. Once I have low-level drivers for the micro-SD card, > I'll give it a try. > Thanks, Alex. The best solution is NOFAT(iR) IMHO http://groups.google.com/group/antti-brain/browse_thread/thread/f5724776440cc1bc no wonder, as it's my solution :) nofat.exe is available on request, but the idea is enough already, it explains the details close enough Antti
Article: 135772
Hi newsgroup, can somebody tell me whether Altera uses a PLL to adjust the setup/hold timing of their 32-bit PCI master/target core? I am curious about it because other FPGA vendors do that. When doing so, it is not always easy to find the correct phase of the PLL. Thank you for your opinion. Rgds AluPin
Article: 135773
sorry, some correction: "to adjust the setup / clock to out timing"
Article: 135774
On Oct 15, 2:30 am, chrisde...@gmail.com wrote: > Hi, > > can someone who has used Xilinx's MIG DDR2 controller help? > > I am trying to simulate the DDR2 controller with the > MT4HTF3264HY-53ED3 memory model generated by MIG 2.0 (Xilinx ISE 10.1 > SP3). I had set the burst length to 8. I need to have a good knowledge > of the timing of the controller as I need to optimize its usage for an > embedded system. > > In the RTL simulation, I write to the DDR2 RAM model at a 133 MHz clock > frequency from memory locations 0x0 to 0x400 in 1 burst via the > application interface, and then observe the timing on the DDR2 bus. > The DDR2 model is the one that I generated from MIG 2.0. > > Here is what I noticed: > 1) A precharge occurred after writing to memory location 0x260. This is > followed by a refresh and activate. > > To my understanding, a precharge and subsequent activate should happen > only if I am accessing an address greater than 1024 or 0x3FF, since > the column width of the RAM is 10 bits. Moreover, I am doing a write > burst to sequential addresses 0 to 0x400. > > 2) TRAS, which is the delay between active and precharge, is supposed to be > 40 ns, but from the above point this does not seem to be much longer. > > 3) Also, I thought 7800 us is the interval between 2 refreshes. How come > it refreshes so much earlier? Isn't this very inefficient? > > Did I miss anything? Kindly help and advise. > > Thanks in advance, > Chris 3) The average refresh interval is 7.8 usec.
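To spell out the arithmetic behind that number: DDR2 devices typically require 8192 refresh commands every 64 ms, so the average refresh interval is 64 ms / 8192 = 7.8125 us -- datasheets usually quote it as 7.8 us or 7800 ns, which may be where the 7800 figure came from. Each refresh only occupies the device for tRFC, on the order of 100 ns for common densities, so the bandwidth lost to refresh is only a percent or two; it is not as inefficient as it first looks.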