Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Hi, I have many years of experience in DSP/embedded system and FPGA/CPLD design. I'm in Vancouver and looking for work. If the company you work for has any openings, or you know of any job opportunity that fits my skills, and your information ultimately leads to my getting the job, I will gladly share half of my first two months' net income with you. I guarantee it. I would also be glad to relocate to other cities in Canada. It is a win for both of us: for me, I get a job, and sharing half of my salary with you is absolutely no problem; for you, that position is bound to be filled by someone eventually, so why not do me a favor? I appreciate any reply. My email address is: wagain@hotmail.com. Thank you for your time. AlbertArticle: 40126
Hi folks, Probably a trivial question, but my research so far has not found a clear answer. We have the various Xilinx design tools, and the Modelsim software, and have designed and synthesised a number of image processing algorithms for various FPGAs. Our approach is to pipe 8-bit pixel values in with an externally supplied data clock. What we'd like to do is convert an 8-bit grayscale image file into a bit stream that we can somehow import into the simulator, record the output bitstream and convert it back into something we can look at, to see if the design has performed as expected. I'm pretty new to this, but surely there is some standard waveform format that we can import into modelsim, saying "Put this data onto these pins, one value per clock tick". All I need is details on the file format, and how to get Modelsim to import it. I can write the software to convert our images into whatever format is necessary. Any help would be greatly appreciated. Thanks in advance, John -- Dr John Williams, Postdoctoral Research Fellow Queensland University of Technology, Brisbane, Australia Phone : (+61 7) 3864 2427 Fax : (+61 7) 3864 1517Article: 40127
On Thu, 28 Feb 2002 16:50:13 +1000, John Williams <j2.williams@qut.edu.au> wrote: >Hi folks, > >Probably a trivial question, but my research so far has not found a >clear answer. > >We have the various Xilinx design tools, and the Modelsim software, and >have designed and synthesised a number of image processing algorithms >for various FPGAs. Our approach is to pipe 8-bit pixel values in with >an externally supplied data clock. > >What we'd like to do is convert a 8-bit grayscale image file into a bit >stream that we can somehow import into the simulator, record the output >bitstream and convert it back into something we can look at, to see if >the design has performed as expected. > >I'm pretty new to this, but surely there is some standard waveform >format that we can import into modelsim, saying "Put this data onto >these pins, once value per clock tick". All I need is details on the >file format, and how to get Modelsim to import it. I can write the >software to convert our images into whatever format is necessary. > >Any help would be greatly appreciated. > >Thanks in advance, > >John Actually there is not. You have to write what's called a test-bench to accomplish what you want. Regardless of the HDL you're using, what you do is to instantiate the block you need to test (device under test) and put the code which reads the file and drives the IOs of the DUT with the contents of the file next to it. 
A very basic verilog test-bench, which assumes you're dealing with 8 bit per pixel pictures of 32x32 in size (1024x8 data), scanned left to right and top to bottom, follows:

module top;
  reg  [7:0] pix [1023:0];
  reg  [7:0] datain;
  wire [7:0] dataout;
  reg        clk;
  reg  [9:0] ptr;
  integer    fout;

  DUT udut(clk, datain, dataout);

  initial begin
    clk = 0;
    ptr = 0;
    fout = $fopen("result.hex");
    $readmemh("image.hex", pix);
  end

  always #5 clk = ~clk;  // free-running clock

  always @(posedge clk) begin
    datain <= pix[ptr];
    ptr    <= ptr + 1;   // wraps at 1023
  end

  always @(posedge clk)
    $fdisplay(fout, "%X", dataout);
endmodule

hth, Muzaffer Kal http://www.dspia.com DSP algorithm implementations for ASIC/FPGA systemsArticle: 40128
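On the software side of that flow, a minimal Python sketch of the image-to-hex conversion John says he is willing to write: it turns a raw 8-bit grayscale image (one byte per pixel, scanned left to right, top to bottom) into the text file format that `$readmemh` reads, one two-digit hex value per line. The function names and file layout are my own, not anything from the Xilinx or Modelsim tools.

```python
# Sketch: convert raw 8-bit grayscale pixels into $readmemh-style
# hex text, one two-digit hex value per line, pixel order preserved.
# Names here are illustrative, not part of any vendor tool flow.

def raw_to_hex(raw_bytes):
    """Return $readmemh-compatible text for a sequence of 8-bit pixels."""
    return "\n".join("%02X" % b for b in raw_bytes) + "\n"

def convert(infile, outfile):
    """Read a headerless raw grayscale file and write the hex file."""
    with open(infile, "rb") as f:
        data = f.read()
    with open(outfile, "w") as f:
        f.write(raw_to_hex(data))
```

The reverse step (reading the `$fdisplay` output back into pixels for viewing) is just `int(line, 16)` per line.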
Hello! I would like to use the rloc for a FDCE component with a virtex device (soft is leonardo + alliance on win2000). It does not work. The lut mapping is ok, and after PAR the logic is placed according to the rloc, but not the flip flop. Does anyone know what the reason could be? (If I do LOC on the component it works!) Here is the code used (the same thing as suggested in a few papers that all look the same):

attribute xc_uset: string;
attribute xc_loc: string;
attribute xc_uset of U0_0: label is "SET1";
attribute xc_uset of U0_1: label is "SET1";
attribute xc_uset of U0_2: label is "SET1";
attribute xc_rloc of U1_0: label is "R0C0.S1";
attribute xc_rloc of U0_1: label is "R0C0.S1";
attribute xc_rloc of U0_2: label is "R0C0.S1";

begin
-- Instances of flip-flops
U0_1: e_clear port map(m(0),I_Inhibit,reset(0));
U0_2: e_mask port map(Image(0), mask(0), m(0));
U0_0: FDCE port map(CE=>I_Init,D=>I_Init,C=>Clk,CLR=>reset(0),Q=>mask(0));

Thanks!Article: 40129
Not wishing to become a netcop, or to get into a long (or even short) debate on the subject, but can we avoid top-posting please? Reasons why at http://www.cs.tut.fi/~jkorpela/usenet/brox.html Anyway, back to our scheduled content... Regarding inverted reset signals in MAX3000 devices kayrock66@yahoo.com (Jay) writes: > Off the top of my head, 2 options... > > 1) You can invert it with one pass through your part then bring it > back in and make your tools happy. > You can invert it inside the part and still connect it to the FF internally. The DFFs in the MAX3000 are active low reset according to figure 2 of the datasheet (from the global clear anyway). There's a little bubble on the border of the DFF symbol. <snip Jay's eminently sensible synchronous reset suggestion> HTH, Martin -- martin.j.thompson@trw.com TRW Conekt, Solihull, UK http://www.trw.com/conektArticle: 40130
John Williams <j2.williams@qut.edu.au> writes: > Hi folks, > > What we'd like to do is convert a 8-bit grayscale image file into a bit > stream that we can somehow import into the simulator, record the output > bitstream and convert it back into something we can look at, to see if > the design has performed as expected. > What I did was write some functions that can read ASCII PGM format, then I convert my images to that format and the testbench reads them in a pixel at a time and places the data on the appropriate pins. Cheers, Martin -- martin.j.thompson@trw.com TRW Conekt, Solihull, UK http://www.trw.com/conektArticle: 40131
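For reference, a minimal Python sketch of the kind of reader Martin describes: parsing an ASCII PGM (P2) file into a flat pixel list that can then be fed to the testbench one value per clock. The function name is my own; Martin's actual functions were written in his HDL testbench, not Python.

```python
# Sketch of an ASCII PGM (P2) reader. Returns (width, height,
# maxval, pixels). '#' comment lines are stripped, as the PGM
# format allows. Illustrative only, not Martin's actual code.

def read_pgm_ascii(text):
    tokens = []
    for line in text.splitlines():
        line = line.split("#", 1)[0]   # drop PGM comments
        tokens.extend(line.split())
    if tokens[0] != "P2":
        raise ValueError("not an ASCII PGM (P2) file")
    width, height, maxval = int(tokens[1]), int(tokens[2]), int(tokens[3])
    pixels = [int(t) for t in tokens[4:4 + width * height]]
    return width, height, maxval, pixels
```

Most image tools (e.g. Netpbm, ImageMagick) can convert arbitrary grayscale images to ASCII PGM, which is what makes this format convenient for a testbench.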
kayrock66@yahoo.com (Jay) wrote in message news:<d049f91b.0202261024.434c568f@posting.google.com>... > Hi Russel, > > There are a couple ways that I can think of that might allow you do do > what you want. The synthesizer is doing logic reduction and changing > the names and meanings of signals in this process. > > 1) Put your clock generator in its own module definition, then turn on > Leonardo's "preserve hierarchy" button, to maintain hopefully the port > names and meaning. > > 2) Generate a seperate EDIF for your clock generator, and wire it into > your design in Quartus using the name you like for the clock net. > > 3) Make the clock a primary I/O of your EDIF so Leonardo can't rename > it. In the edif file, clk1M25 is a net: (net clk1M25 (joined (portRef Q (instanceRef clk_sys_reg_clk32 )) (portRef CLK (instanceRef reg_Q2 )))) However, i think nets don't appear in the node finder. I can get instances such as above: clk_sys_reg_clk32. I found a bug in quartus2 where a derived clock (in the timing settings) can't be set with a division ratio greater than 100. > Having said that, in general, you don't generate clocks in FPGA's from > the output of counters. You usually would use the outputs of your > counters as ENABLES for flops and clock everything directly from that > high frequency net. Burns power, limits speed, but makes the tools > happy. > > Regards Hmm. That got me thinking. I never really did like deriving clocks from counter outputs, because they're time-delayed compared to the external master clock. Wonder what other clock dividing arrangements are commonly used? 
I guess you mean something like:

proc: process(sysclk, reset)  -- 40MHz (25ns)
begin
  if reset='1' then
    Q1 <= '0';
  elsif sysclk'event and sysclk='1' then  -- 40MHz (25ns)
    if clk1M25='1' then  -- enable, 1.25MHz pulse with 25ns width
      Q1 <= D1;
    end if;
  end if;
end process;

None of the books I have say anything about proper ways of deriving sub-clocks :( > rjshaw@iprimus.com.au (russell) wrote in message news:<c3771dbf.0202260038.5876fcc3@posting.google.com>... > > Hi all, > > > > I've just started using web Quartus-2 to compile > > an edif file from leonardo spectrum (acex 1k30). > > > > In the edif file (and in the vhdl code), i have a > > signal called clk1M25 (an internal node). I wanted > > to set a timing constraint on it in relation to the > > clock input it is derived from (thru a counter circuit). > > > > However, clk1M25 doesn't appear anywhere using > > the node-finder search box. How can i find it?Article: 40132
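The enable scheme above can be sanity-checked with a tiny behavioral model (plain Python, not HDL; the divide ratio of 32 matches the 40MHz/1.25MHz example, everything else is illustrative): all state is updated on the fast clock, a counter emits a one-cycle-wide enable, and the "slow" register only loads on enabled cycles.

```python
# Behavioral model (not HDL) of the clock-enable divider scheme:
# a counter in the fast-clock domain generates a one-cycle-wide
# enable every DIV cycles; the "slow" register Q1 loads only on
# enabled cycles. DIV = 32 corresponds to 40MHz / 1.25MHz.

DIV = 32

def simulate(data_stream):
    count, q1, updates = 0, 0, 0
    for d in data_stream:              # one iteration per fast-clock edge
        enable = (count == DIV - 1)    # one-cycle-wide pulse
        count = 0 if enable else count + 1
        if enable:                     # flop clocked by sysclk, gated by enable
            q1 = d
            updates += 1
        yield q1, updates

def run(data_stream):
    return list(simulate(data_stream))
```

Because every flop still sees the same fast clock edge, there is no skew between the "divided" domain and the master clock, which is exactly the advantage over clocking from a counter output.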
True, but what if you want to get tricky and have one flip flop latch on the rising edge and another latch on the falling edge? This seems to work, but it does whine about the clock not being global. I haven't tried it, but I was wondering if you could invert the clock and then feed it through a global buffer (assuming you haven't used all the global buffers). Don't know if that'd work or not. > Regarding inverted reset signals in MAX3000 devices > kayrock66@yahoo.com (Jay) writes: > > Off the top of my head, 2 options... > > > > 1) You can invert it with one pass through your part then bring it > > back in and make your tools happy. > > > > You can invert it inside the part and still connect it to the FF > internally. > > The DFFs in the MAX3000 are active low reset according to figure 2 of > the datasheet (from the global clear anyway). There's a little bubble > on the border of the DFF symol. >Article: 40133
Hello, I am using ISE 4.1 SP3 with a Spartan-II. My VHDL design has 3 processes: Process A: Runs at 33MHz to fill a 16 location FIFO with 8-bit data samples and then keep it full. Process B: Runs at 33MHz to take data from the FIFO when told to and supply it to process C via a register. Process C: Runs at 425kHz and sends FIFO 8-bit data samples to a DAC bit-serially then asks process B for next piece of data. Process A is fed with data via a Visual C++ app (the Spartan-II is mounted on a PCI board) which is synchronised with the FPGA using an interrupt pin that the FPGA can assert and the C++ can read. I have used this system for many designs with no trouble (none of them had multiple clock domains however). The problem here is that the design is getting stuck in state for no apparent reason! (i.e. the C++ hangs waiting for the interrupt pin!). The system works for a random number of samples (between 28 and 33 it seems) and then gets stuck in state. This is very strange because it means that my protocols do work. The weird thing is that I put a piece of debug code in another state to send a signal out to a pin to probe, I ran the flow again to get a bitstream and the system ran perfectly for all 75001 samples I am using! The debug code was "Debug <= '1'". Then I enabled clock DLLs using the BUFGDLL component and it hangs again! Previously, I had it working perfectly using the clock DLLs but without a FIFO (i.e. 1 sample at a time from C++ to FPGA to DAC) but I got some stutters hence I introduced the FIFO. In that design I also had hanging problems but after I rejigged my protocol in my VHDL state machines it worked perfectly. It seems that seemingly random changes of VHDL make or break the system. I guess it must be to do with my 2 different clock rates but that is the way it has to be. I am at a loss - anyone any ideas? Thanks for your time, KenArticle: 40134
Hello all, I have a module that accepts 16 bit words at 155MHz and I want to store them in a 128x32 BRAM. I am going to use a DLL (in a Virtex-E FPGA) to divide the 155MHz clock by 2, as this frequency seems to be pretty high to write to the BRAM. So far I think 2 processes are enough to do my job, one operating @ 155MHz to accept the 16-bit data and store them in a 32-bit register, and the other one @ 75MHz to write the content of the 32-bit register to the BRAM. I am thinking of assigning the BRAM's signals (ENABLE,WRITE,ADDRESS,DATA_IN) on the falling edge of the 75MHz clock; the main reason for this is the setup time of the BRAM's signals (in this way the address and data are 6 ns before the next rising edge of the clock, where the BRAM samples its inputs). My question now :) , if one process uses the falling edge of one clock, does this cause problems for other processes in the design, e.g. processes that use a different clock or processes using the rising edge of the same clock? Best Regards, Harris.Article: 40136
Had similar problems myself with a XCV400. I just assumed the routing algorithm was different and just doesn't work as well with my particular design on the 400. I've stuck with 3.3.8 for the 400 and use 4.1 with newer parts.Article: 40137
Ken Mac wrote: > My VHDL design has 3 processes: > > Process A: Runs at 33MHz to fill a 16 location FIFO with 8-bit data > samples and then keep it full. > Process B: Runs at 33MHz to take data from the FIFO when told to and > supply it to process C via a register. > Process C: Runs at 425kHz and sends FIFO 8-bit data samples to a DAC > bit-serially then asks process B for next piece of data. > Process A is fed with data via a Visual C++ app (the Spartan-II is mounted > on a PCI board) which is synchronised with the FPGA using an interrupt pin > that the FPGA can assert and the C++ can read. > > I have used this system for many designs with no trouble (none of them had > multiple clock domains however). The problem here is that the design is > getting stuck in state for no apparent reason! (i.e. the C++ hangs waiting > for the interrupt pin!). Sounds to me like you have a problem crossing clock domains. > The system works for a random number of samples (between 28 and 33 it seems) > and then gets stuck in state. This is very strange because it means that my > protocols do work. Not strange at all. Suppose we have two registers in one clock domain sampling a signal from the other clock domain. If both get the same value for the next clock, the logic works. If only one gets the value, the logic hangs. > The weird thing is that I put a piece of debug code in another state to send > a signal out to a pin to probe, I ran the flow again to get a bitstream and > the system ran perfectly for all 75001 samples I am using! The debug code > was "Debug <= '1'". Different placement, different routing, different timing, different odds of failure. Might work well at 25C, and fail like above at 28C. > Then I enabled clock DLLs using the BUFGDLL component and it hangs again! Different timing, different odds of failure. > Previously, I had it working perfectly using the clock DLLs but without a > FIFO (i.e. 
1 sample at a time from C++ to FPGA to DAC) but I got some > stutters hence I introduced the FIFO. > > In that design I also had hanging problems but after I rejigged my protocol > in my VHDL state machines it worked perfectly. > > It seems that seemingly random changes of VHDL make or break the system. I > guess it must be to do with my 2 different clock rates but that is the way > it has to be. > > I am at a loss - anyone any ideas? First, sync the 425kHz clock to 33MHz, then edge detect, and then use the edge-detected 425kHz signal as a clock enable for process C. Code fragments:

process(clk33)
begin
  if rising_edge(clk33) then
    synslow  <= clk425;
    synslow2 <= synslow;
    synslow3 <= synslow2;
    en425    <= synslow2 and not synslow3;
  end if;
end process;

processc: process(clk33)  -- was (clk425)
begin
  if rising_edge(clk33) then
    if en425 = '1' then   -- was rising_edge(clk425)
      ....

The reason this (hopefully!) will solve your problem is that almost all logic will be running on a single clock. While there is a chance that synslow will not correctly clock in clk425 on a rising or falling edge ("go metastable"), synslow2 is much less likely to fail (as the mean time between failures >> age of universe), and en425 even less so. Also, you could synchronize all control signals between the two processes. More complex. -- Phil HaysArticle: 40138
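Phil's "age of the universe" remark can be put in rough numbers with the standard metastability MTBF formula, MTBF = exp(t_slack/tau) / (T0 * f_clk * f_data). The constants tau and T0 below are generic textbook-style values, NOT Spartan-II characterization data, so treat the result as an illustration of why the second flop matters, not a guarantee.

```python
# Back-of-the-envelope metastability MTBF estimate. tau and T0 are
# assumed illustrative device constants, not Spartan-II data.
import math

def mtbf(slack_s, tau_s, t0_s, f_clk_hz, f_data_hz):
    """Mean time between synchronization failures, in seconds."""
    return math.exp(slack_s / tau_s) / (t0_s * f_clk_hz * f_data_hz)

f_clk, f_data = 33e6, 425e3     # Ken's two clock rates
tau, t0 = 0.5e-9, 1e-9          # assumed device constants

# ~1ns of resolution slack: roughly what a signal feeding logic
# directly might get.  A second flop buys a full 30ns clock period.
one_stage = mtbf(1e-9, tau, t0, f_clk, f_data)
two_stage = mtbf(30e-9, tau, t0, f_clk, f_data)
```

With these assumptions `one_stage` comes out well under a second while `two_stage` exceeds the age of the universe by orders of magnitude: the exponential in the slack term is the whole story.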
================================================================ Newsgroup: comp.arch.fpga ================================================================ At 10am EST Thurs Feb 28 2002 there was a change to the article numbers in this group. If you connected and this is the first or only article you see, you can read previous articles by using "show all articles" or resetting the group. There is a gap of 4434 articles, so you will want to request a download of that many plus however many you expected to see. You will only actually get the usual number of posts since you last read the group. That's all there is to it: a one-time odd number of messages. This is part of our news infrastructure upgrade to make usenet better for all SBC/PI customers. For more information you can go to http://newsgroups.news.prodigy.com, our user help site. We do recommend that all Usenet readers check the announcements in prodigynet.help.tech.newsgroups, even if you don't have time to follow the discussions. Please post any questions to that group, since it is read by the special contributors and will be far faster than email to anyone. -- SBC/Prodigy news admin <news@prodigy.net>Article: 40139
Hi there, it was very interesting for me to see a posting about PCI books, because we too have to decide which PCI book to buy. We are currently developing FPGA/ASIC logic to connect to PCI and also software drivers (Linux for now). We now have to decide whether to buy "PCI System Architecture, 4th Ed." and "PCI-X System Architecture" (both) or "PCI and PCI-X Hardware and Software, 5th Ed." Any comments on those (three) books? Thanks. MatthiasArticle: 40140
Thanks very much for the detailed suggestion Phil. I will give it a go and let you know what happens! KenArticle: 40141
I suggest you grab pencil and paper and do a clock-by-clock timing analysis. You will find that your clock-speed reduction buys you nothing, unless you also double-buffer the data. One of your words arrives nice and early, the other one late. However you clock the BRAM, one of the two words has the same old short set-up time... Double-buffering would help. But Ray has mentioned some neat tricks to avoid the long set-up time of the control inputs. I will get back with more constructive notes. "Gotta run" Peter Alfke =================== "H.L" wrote: > Hello all, > > I have a module that accepts 16 bit words at 155MHz and I want to store then > in an 128x32 BRAM. I am going to use a DLL (in a Virtex-E FPGA) to divide by > 2 the 155MHz clock as this frequency seems to be pretty high to write in the > BRAM. So far I think 2 processes are enough to do my job, one operating @ > 155MHz to accept the 16-bit data and store them in a 32-bit register and the > another one @ 75MHz to write the content of the 32-bit register in the BRAM. > I am thinking to assign the BRAM's signals (ENABLE,WRITE,ADDRESS,DATA_IN) in > the falling edge of the 75MHz clock, the main reason for this is the setup > time of the BRAMs signals (in this way the address,data are 6 ns before next > rising edge of the clock where BRAM samples its inputs). My question now :) > , if one process uses the falling edge of one clock does this causes > problems to other processes in the design , e.g to processes that use a > different clock or to processes using the rising edge of the same clock? > > Best Regards, > Harris.Article: 40142
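The 16-to-32 pairing Harris describes, with the double-buffering Peter recommends, can be modeled behaviorally (plain Python, not HDL; which halfword lands in the low half is an assumption, Harris's post does not say): successive 16-bit words are paired into one 32-bit word, and each completed pair would then sit in a second register for a full slow-clock period before the BRAM write, so neither halfword sees a short setup window.

```python
# Behavioral sketch (not HDL) of the 16-bit-to-32-bit gearbox.
# Which input word lands in the low half is an assumption here;
# a trailing unpaired word is simply dropped, as the hardware
# register would hold it until its partner arrives.

def gearbox_16_to_32(words16, low_word_first=True):
    """Pair successive 16-bit words into 32-bit words."""
    out = []
    for i in range(0, len(words16) - 1, 2):
        lo, hi = words16[i], words16[i + 1]
        if not low_word_first:
            lo, hi = hi, lo
        out.append((hi << 16) | lo)
    return out
```

The point of Peter's double-buffer is timing, not function: the model only shows that the pairing itself is trivial, so the remaining design work is entirely in when the 32-bit word is presented to the BRAM relative to the slow clock edge.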
"Phil Hays" <spampostmaster@attbi.com.com> wrote in message news:3C7E50A3.E033467E@attbi.com.com... Elegant solution Phil, well done! Regards Phil C -- Posted via Mailgate.ORG Server - http://www.Mailgate.ORGArticle: 40143
"Peter Ormsby" <faepete.deletethis@mediaone.net> schrieb im Newsbeitrag news:5whf8.9908$Or3.1030973@typhoon.mn.ipsvc.net... > > Also, I dont think it is commercial sensefull, to "waste" such a big FPGA > Falk, Peter, > > Look, just because it isn't easy to implement something in a Xilinx part > doesn't mean that it's not a valid solution. There's a hint of some uneven > treatment going on when suggesting that someone looking at Stratix is > dreaming while comments about an Virtex device with 3.125 Gbps HSSI are > dropped all over this newsgroup without any questions being raised. ??? Sorry, but I can't get your point. Did I say anything against Stratix? No, just that I think it is no good to waste such a big FPGA, Xilinx or Altera. > The Stratix design tools are here (Synplicity, Leonardo Spectrum, Quartus > II, Modelsim, App Notes, White papers, etc.). The devices will be here > before most design cycles starting today will require parts. The 300 MHz, WHEN? The sentence above is very uncertain. We WILL see when the Stratix hits the market (and not just a few es-parts). > 2Mb ROM fits easily into a medium sized Stratix device. What, exactly, was Yes. > wrong with my suggestion that Antonio take a look at the part? He's looking Nothing. > for a solution - I offered him one that may or may not work for him (that's > his decision to make) - and I get jumped on by the two of you. I'll say Sure, you offered A solution. Peter has his worries about availability. I have worries about the commercial sense of this solution. Just saying "I need a 4096x512 bit ROM" doesn't say too much. A main question would be: "How fast must you access this ROM?" Maybe it's just a few MHz, so storing this inside a 250MHz+ ROM IS a waste of silicon and MUCH money. > this again: Just because it isn't easy to implement something like this in > a Xilinx part doesn't mean it's not a valid solution. There's more than one > programmable logic vendor out there (and more than just two, too) despite Sure, there are. And there are also good devices that perform better than Xilinx parts. This is not a problem, also not for the Xilinx V.I.P.s. > what you might think from reading this group. -- MfG FalkArticle: 40144
"Brady Gaughan" <bgaughan@aircom.com> schrieb im Newsbeitrag news:25306d63.0202271438.72b3f04f@posting.google.com... > We have an upcoming design that will need to support > LVDS point-to-point and point-to-multipoint. So we would > like to have the programmable LVDS terminations in the > FPGA. So far, the only device currently shipping that > I have seen this in is the Orca Series 4. Any other > devices support this? Xilinx doesn't seem to. Altera > will with the Stratix series, but that may be a while. It looks like you missed a lot in the datasheets. Many families support LVDS. For Xilinx: Spartan-IIE, Virtex-E and Virtex-II. For Altera: Mercury, Apex, Apex-II (maybe others too, I don't know). -- MfG FalkArticle: 40145
I need not say anything more... Victor "Jesse Kempa" <kempaj@yahoo.com> wrote in message news:95776079.0202271641.2385afd8@posting.google.com... > I can help elaborate on this one. I've seen both kits, and have been > won over by Nios myself. > > A coworker of mine ordered his MicroBlaze kit last year and it took > over two months to show up. While I found that the Insight development > board was a pretty good softcore CPU development platform (as is the > Nios dev board), the MicroBlaze product itself is a bit (to say the > least) lacking. After receiving the kit and opening it up we found no > MicroBlaze CD! It showed up in a separate mailing on a generic CD-R, > as if it were burned on some engineer's cubicle CD burner. > > These tidbits aside, the MicroBlaze design flow is entirely script > based - without any GUI. Nios is also script based, but it is all > abstracted in their system builder software. In order to get a basic > m-blaze system with CPU and a couple peripherals (UART, general > purpose IO), a couple of pages of typing are required to instantiate > everything! One goof up and no system for you! The Nios kit GUI lets > you make an equivalent system in seconds. > > -- Jesse Kempa > > emanuel stiebler <emu@ecubics.com> wrote in message news:<3C7D300C.3B5534A1@ecubics.com>... > > Victor Schutte wrote: > > > > > > I have given up trying to get hold of it. > > > > Care to elaborate a little further ? > > > > > I am now hooked on NIOS ver 2.0. > > > Excellent IP. > > > > I'll check ;-) > > > > cheers & thanksArticle: 40146
I'm trying to pitch that my client use Synopsys Design Compiler instead of an FPGA-specific synthesizer from another vendor, since his Xilinx Virtex-II FPGA is a proto for a standard cell part. The clock speed isn't important; verification of the tool flow and design database is. The problem I'm running into is that the Design Compiler output uses almost 200% of the LUTs compared to the purpose-built FPGA synthesizer, so the logic will no longer fit the proto board. Mini example: Design Compiler: 1760 LUTs. FPGA synthesizer: 824 LUTs. Design Compiler synthesizes to cells like AND2, OR2, AND4, etc., whereas the FPGA-specific tool maps directly to special LUTs custom made for the logic required, like LUT_AB5A and LUT_67FE, etc. Now I figured the Xilinx mapper would be smart enough to "map" the Design Compiler AND2, OR2, etc. into more compact LUT_ABCD and LUT_6534 type cells, but it just seems to be doing a one-for-one map with no optimization. It appears that Xilinx did not write the mapper optimization (option -oe) for the recent products Virtex-E/II and Spartan-II, in effect giving up support for Design Compiler. Can anyone else comment on this? It seems crazy that I can't use the old man of synthesis (Design Compiler) at $100k a seat anymore. BTW - Altera DOES still do map optimization on Design Compiler EDIF files.Article: 40147
Rick, http://www.xilinx.com/ise/marketing/index.htm (Just ignore that it says 'marketing' in the link...) Homann -- Magnus Homann, M.Sc. CS & E d0asta@dtek.chalmers.seArticle: 40148
Brady, Any reason why the receive termination resistor MUST be inside as opposed to outside? I agree that it is more convenient, nicer, and the short stubs to the resistors cause a slight degradation in the signal integrity above 300 MHz, 600 Mbs. It doesn't prevent the Virtex II designs from running reliably at the data sheet 420 MHz, 840 Mbs rates, however. The whole reason for DCI in Virtex II on single ended IOs is to have improved SI, and lessen the pcb layout burden, so we can agree that internal resistors are usually better (but not always -- as the power has to go somewhere, and the part ends up getting hotter). Any other reasons? It is always nice to know what the customer concerns are. One other question, since there is no such thing as LVDS point to multi-point (it is not a standard), and we are able to use external resistors at both the drivers and receivers to create a "buss LVDS" solution (but you pay in terms of signal integrity, and the speed is less as a result): how does having an internal LVDS receive termination help accomplish a point to multi-point arrangement? Austin Brady Gaughan wrote: > We have an upcoming design that will need to support > LVDS point-to-point and point-to-multipoint. So we would > like to have the programmable LVDS terminations in the > FPGA. So far, the only device currently shipping that > I have seen this in is the Orca Series 4. Any other > devices support this? Xilinx doesn't seem to. Altera > will with the Stratix series, but that may be a while. > > Thanks for any information, > > Brady Gaughan > Airnet CommunicationsArticle: 40149
Hi, In order to get that system working you could only copy one of the predefined examples and change a few lines, not several pages. I'm not sure if I want to go into all the differences between MicroBlaze and NIOS without maybe stepping into marketing territory. And I'm sure that you didn't try the latest MDK 2.1 of MicroBlaze, which is a much improved version over the 1.9. The normal distribution of MDK is over the internet. Xilinx will soon also come out with some GUI-driven system generation as well. Göran Bilski Jesse Kempa wrote: > I can help elaborate on this one. I've seen both kits, and have been > won over by Nios myself. > > A coworker of mine ordered his MicroBlaze kit last year and it took > over two months to show up. While I found that the Insight development > board was a pretty good softcore CPU development platform (as is the > Nios dev board), the MicroBlaze product itself is a bit (to say the > least) lacking. After receiving the kit and opening it up we found no > MicroBlaze CD! It showed up in a separate mailing on a generic CD-R, > as if it were burned on some engineer's cubicle CD burner. > > These tidbits aside, the MicroBlaze design flow is entirely script > based - without any GUI. Nios is also script based, but it is all > abstracted in their system builder software. In order to get a basic > m-blaze system with CPU and a couple peripherals (UART, general > purpose IO), a couple of pages of typing are required to instantiate > everything! One goof up and no system for you! The Nios kit GUI lets > you make an equivalent system in seconds. > > -- Jesse Kempa > > emanuel stiebler <emu@ecubics.com> wrote in message news:<3C7D300C.3B5534A1@ecubics.com>... > > Victor Schutte wrote: > > > > > > I have given up trying to get hold of it. > > > > Care to elaborate a little further ? > > > > > I am now hooked on NIOS ver 2.0. > > > Excellent IP. > > > > I'll check ;-) > > > > cheers & thanks