Try: http://www.alacron.com/downloads/vncl98076xz/CameraLink20v113.pdf

There is a section in appendix C which shows how to hook up the bits for various configurations. Note that "Camera Link" is not a single image format standard, so it's best to get the information you need directly from the camera provider.

HTH,
Gabor

ivan@gmail.com wrote:
> Hi All,
>
> I have a project to get the image data from a camera and want to convert
> it to grayscale information. The camera has a Camera Link output and can
> be connected directly to a universal frame grabber. I am trying to
> replace the frame grabber with an FPGA. Does somebody know, or have
> information on, how to get the Camera Link specification?
>
> The documents I found mostly dealt only with the pin configuration and
> didn't explain how to obtain the image pixels. I plan to use a
> DS90CR288A to convert from LVDS to parallel. After that I don't know
> what I should do with the parallel signals.
>
> Thanks in advance,
>
> -ivan

Article: 110801
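On the "how to obtain the image pixels" part of the question above, the usual base-configuration approach is to qualify each sample with the frame/line/data-valid signals the receiver recovers alongside the data. A minimal sketch follows; the port names (FVAL, LVAL, DVAL) are conventional Camera Link names and not from the post, and the actual bit mapping out of the DS90CR288A depends on the camera, as Gabor notes:

library ieee;
use ieee.std_logic_1164.all;

entity pixel_capture is
  port (
    pix_clk  : in  std_logic;                     -- recovered Camera Link pixel clock
    fval     : in  std_logic;                     -- frame valid
    lval     : in  std_logic;                     -- line valid
    dval     : in  std_logic;                     -- data valid
    cam_data : in  std_logic_vector(7 downto 0);  -- one 8-bit tap from the receiver
    pix_out  : out std_logic_vector(7 downto 0);
    pix_we   : out std_logic                      -- write strobe, e.g. into a FIFO
  );
end entity pixel_capture;

architecture rtl of pixel_capture is
begin
  -- A pixel is valid only while FVAL, LVAL and DVAL are all high.
  process (pix_clk)
  begin
    if rising_edge(pix_clk) then
      if fval = '1' and lval = '1' and dval = '1' then
        pix_out <= cam_data;
        pix_we  <= '1';
      else
        pix_we  <= '0';
      end if;
    end if;
  end process;
end architecture rtl;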
On 23 Oct 2006 05:57:30 -0700, "Antti" <Antti.Lukats@xilant.com> wrote:

>fsdg...@spone.com wrote:
>
>> In article <1161605132.571315.149550@k70g2000cwa.googlegroups.com>,
>> "Antti" <Antti.Lukats@xilant.com> wrote:
>>
>> > greg@accupel.com.nospam wrote:
>> >
>> > > I'm trying to configure a Spartan 3 via Slave Serial mode at power up.
>> > > I'm storing the configuration file in SPI Flash and using a uP to read
>> > > the Flash and send the configuration bit stream (and clock) to the FPGA.
>> > > (I've considered using the FPGA Master Serial mode to clock the SPI
>> > > Flash, and just using the uP to initiate the flash read instruction, but
>> > > the hardware is not currently configured that way, so I want to get it
>> > > working in the Slave Serial mode first on my current hardware.)
>> > >
>> > > I've read Xapp 502 but it still leaves me confused on a couple of points.
>> > >
>> > > 1. The app note says a .bit file contains header info that should not be
>> > > downloaded to the FPGA, so I'm trying to use a .bin file. However, I
>> > > thought the header information allowed the clock rate to be increased in
>> > > the Master Serial mode. Does the .bin file also include that
>> > > information? (If I try to use Master Serial mode later.)
>> > >
>> > > 2. When I serialize the .bin file bytes into a bit stream, do I load
>> > > the bits from each byte MSB or LSB first into the FPGA?
>> > >
>> > > 3. When I finish loading the entire .bin file I wait for DONE to go
>> > > high, and while waiting test if INIT is low (which indicates a CRC
>> > > error). So far I never get a DONE high or an INIT low. Seems like I
>> > > should get one or the other? Configuration works fine using Platform
>> > > Cable USB (JTAG). M0,M1,M2 are configured correctly in Slave Serial mode.
>> > >
>> > > Thanks for answers/suggestions.
>> >
>> > there is a "File header" present in the BIT file, and a
>> > BITstream header present in BIT and BIN (and other) files.
>> >
>> > it actually doesn't matter if you don't strip the FILE header;
>> > the simplest is usually to just take the .BIT file, and:
>> >
>> > 1 send it as is to the DIN. DONE=1? you are lucky
>> > 2 reverse the bits in the BIT, send to DIN. DONE=1? if not, something is wrong
>> >
>> > one of the 2 should work
>> >
>> > Antti
>>
>> I don't understand what you are saying. Do you send the LSB first, or
>> the MSB?
>>
>> Greg
>
>I try first either LSB or MSB, and then the other one,
>so I don't have to figure out which is the right one; within
>2 trial attempts it must work.

Which is fine as long as there are no other problems. If there are, it doubles the number of permutations of things that may be wrong...!

Article: 110802
"Mike Harrison" <mike@whitewing.co.uk> schrieb im Newsbeitrag news:2pvpj2p3rr5b30lg03j7ess679sglil2ia@4ax.com... > On 23 Oct 2006 05:57:30 -0700, "Antti" <Antti.Lukats@xilant.com> wrote: > >> >>fsdg...@spone.com schrieb: >> >>> In article <1161605132.571315.149550@k70g2000cwa.googlegroups.com>, >>> "Antti" <Antti.Lukats@xilant.com> wrote: >>> >>> > greg@accupel.com.nospam schrieb: >>> > >>> > > I'm trying to configure a Spartan 3 via Slave Serial mode at power >>> > > up. >>> > > I'm storing the configuration file in SPI Flash and using a uP to >>> > > read >>> > > the Flash and send the configuration bit stream (and clock) to the >>> > > FPGA. >>> > > (I've considered using the FPGA Master Serial mode to clock the SPI >>> > > Flash, and just using the uP to initiate the flash read instruction, >>> > > but >>> > > the hardware is not currently configured that way, so I want to get >>> > > it >>> > > working in the Slave Serial mode first on my current hardware.) >>> > > >>> > > I've read Xapp 502 but it still leaves me confused on a couple of >>> > > points. >>> > > >>> > > 1. The app note says a .bit file contains header info that should >>> > > not be >>> > > downloaded to the FPGA, so I'm trying to use a .bin file. However, I >>> > > thought the header information allowed the clock rate to be >>> > > increased in >>> > > the Master Serial mode. Does the .bin file also include that >>> > > information? (If I try to use Master Serial mode later.) >>> > > >>> > > 2. When I serialize the .bin file bytes into a bit stream, do I >>> > > load >>> > > the bits from each byte MSB or LSB first into the FPGA? >>> > > >>> > > 3. When I finish loading the entire .bin file I wait for DONE to go >>> > > high, and while waiting test if INIT is low (which indicates a CRC >>> > > error). So far I never get a DONE high or an INIT low. Seems like I >>> > > should get one or the other? Configuration works fine using Platform >>> > > Cable USB (JTAG). M0,M1,M2 are configured correctly in Slave Serial >>> > > mode. >>> > > >>> > > Thanks for answers/suggestions. >>> > >>> > there is "File header" present in BIT file and >>> > BITstream header present in BIT and BIN (and other files) >>> > >>> > it actually doesnt care if you dont strip the FILE header, >>> > the simplest is usually just take the .BIT file, and: >>> > >>> > 1 send it as is to the DIN, DONE=1? you are lucky >>> > 2 reverse bits in BIT, send to DIN DONE=1, if not something is wrong >>> > >>> > one of 2 should work >>> > >>> > Antti >>> >>> I don't understand what you are saying. Do you send the LSB first, or >>> the MSB? >>> >>> Greg >> >>I try first either LSB or MSB, and the the other one. >>so I dont have to figure out what is the right one, within >>2 trial attempts it must work. > > Which is fine as long as there are no other problems. > If there are, it doubles the number of permuations of things that may be > wrong...! sure. sending (or not) extra clocks is one possible issue. but trying the bit swap once says that there is one problem less to check at leat. AnttiArticle: 110803
Hmm... I am using a single clock. I have routed the bottom 6 bits of data (both into and out of the FIFO), along with the clock, read-enable and write-enable signals.

-> On writing... both functional and gate-level simulations have data aligned properly with the clock and write-enable signals. No problem here.

-> On reading... gate level needs an additional clock to get data out of the FPGA.

I may not have the "exact" setup for the external control signals that control the reading from the FPGA. I'm awaiting access to the legacy board design that has an external FIFO using the same control lines that I'm feeding the internal one in my design (I'm creating a "superset" of the existing FPGA design).

Still running a bit confused on this. I worked with a local Arrow FAE and found that I needed to introduce a 1 ns clock delay on two of my 3 clocks (on the ones that are derived from the "master" clock source) to eliminate my hold timing issues. I've modified both the functional and gate-level test benches to compensate for this issue.

Still don't have a clue why the legacy synchronous FIFO mode is acting differently between functional and gate level. Could someone explain this better to me?

-Bob

KJ wrote:
> "Bob" <rjmyers@raytheon.com> wrote in message
> news:4534F75A.45251BBE@raytheon.com...
> > I have a number of Dual-Port RAMs and FIFOs that I've implemented with
> > Quartus II 6.0 sp#1, targeting a Stratix II device.
> Dual clock or single clock? I'll assume from here on down that it is single
> clock. If it's a dual clock memory then you go down a different path (I'll
> discuss this at the end of the post).
>
> > When I simulate with my test bench in the "functional" world, everything
> > acts as expected.
> Good
>
> > When I compile the .vho/.sdo files and run against a slightly modified
> > test bench (differences in output file names and the test bench calling
> > out the VITAL version of the FPGA implementation), it appears that there
> > is an additional register delay that has been introduced into the
> > Dual-Port/FIFO implementations.
> >
> > Is this normal?
> No
>
> > Or do I need to make some type of adjustment for this behavior? I don't
> > quite know how to explain/justify the differences at this point.
> You don't justify them, you find the cause of them. The most likely
> explanation is that your testbench is not adhering to the setup timing of
> the final routed design. Check that the signals arrive at the input to the
> design with at least the amount of time specified in the timing report
> output file. If this is the problem then in essence your testbench is
> 'violating' the timing of the device. When that happens lots of things can
> go wrong; what you're seeing would be simply a symptom of that.
>
> If timing is not being violated and this is a single clock design then
> you'll find that pre-route and post-route simulations match clock for
> clock... without exception... regardless of what the actual design is.
>
> If you do have two clocks running around then where signals move from one
> clock domain to the other the 'extra' delay that you're seeing is not
> unexpected and there isn't really anything that you can do about it. Just
> like in the real world where you can't count on two async clocks to have
> any phase relationship, and clock domain crossing can cause an extra clock
> cycle delay, you'll see the same thing at times when comparing
> pre/post-route sims.
>
> KJ

Article: 110804
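One way to keep a post-route (gate-level) simulation from violating setup/hold, as KJ describes above, is to change every stimulus a fixed margin after the active clock edge rather than coincident with it. A minimal testbench sketch; the signal names and the 2 ns margin are illustrative, and the margin should exceed the worst-case hold time from the timing report:

library ieee;
use ieee.std_logic_1164.all;

entity tb_hold_margin is
end entity;

architecture sim of tb_hold_margin is
  constant CLK_PERIOD  : time := 20 ns;
  constant HOLD_MARGIN : time := 2 ns;  -- assumed: larger than worst-case hold time
  signal clk    : std_logic := '0';
  signal wr_en  : std_logic := '0';
  signal wr_dat : std_logic_vector(7 downto 0) := (others => '0');
begin
  clk <= not clk after CLK_PERIOD / 2;

  -- Drive stimulus a fixed margin after the rising edge instead of
  -- coincident with it, so the routed netlist never sees a setup/hold
  -- violation caused by the testbench itself.
  stimulus : process
  begin
    wait until rising_edge(clk);
    wr_en  <= '1'   after HOLD_MARGIN;
    wr_dat <= x"2A" after HOLD_MARGIN;
    wait until rising_edge(clk);
    wr_en  <= '0'   after HOLD_MARGIN;
    wait;  -- done
  end process;
end architecture sim;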
Davy wrote:
> I also want to know: does Cadence provide a verification methodology
> like Synopsys and Mentor do?

From what I found in this thread at http://verificationguild.com/modules.php?name=Forums&file=viewtopic&t=1404&postdays=0&postorder=asc&start=15 (check the last two posts), Cadence seems to have a verification methodology called UVM.

Article: 110805
In article <453cbdee_1@news.bluewin.ch>, "Amontec, Larry" <laurent.gauch@ANTI-SPAMamontec.com> wrote:

> > 2. When I serialize the .bin file bytes into a bit stream, do I load
> > the bits from each byte MSB or LSB first into the FPGA?
> send byte by byte with LSB first

Thank you. It is difficult debugging hardware and software together when there is more than one thing wrong at the same time, so I wanted to remove this variable.

> add more free CCLK edges until DONE comes high!

I am already doing that.

> Regards,
> Laurent
> www.amontec.com

Thank you. I'll keep looking for other signal problems.

Greg

Article: 110806
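Putting Laurent's two points together (LSB of each byte first, then keep clocking after the last byte until DONE rises), a minimal sketch of a slave-serial feeder follows. It is written as a small VHDL state machine rather than as the original poster's uP firmware, all names are illustrative, and it assumes byte_in is valid one clock after byte_rd pulses:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity slave_serial_feeder is
  port (
    clk       : in  std_logic;                     -- system clock
    start     : in  std_logic;                     -- begin configuration
    byte_in   : in  std_logic_vector(7 downto 0);  -- next .bin byte (e.g. from SPI flash)
    byte_last : in  std_logic;                     -- high along with the final byte
    byte_rd   : out std_logic;                     -- request the next byte
    done      : in  std_logic;                     -- FPGA DONE pin
    cclk      : out std_logic;                     -- FPGA CCLK pin
    din       : out std_logic                      -- FPGA DIN pin
  );
end entity;

architecture rtl of slave_serial_feeder is
  type state_t is (idle, load, shift_lo, shift_hi, flush_lo, flush_hi, finished);
  signal state     : state_t := idle;
  signal sreg      : std_logic_vector(7 downto 0) := (others => '0');
  signal bitcnt    : unsigned(2 downto 0) := (others => '0');
  signal last_seen : std_logic := '0';
begin
  process (clk)
  begin
    if rising_edge(clk) then
      byte_rd <= '0';
      case state is
        when idle =>
          cclk <= '0';
          din  <= '1';
          if start = '1' then
            byte_rd <= '1';
            state   <= load;
          end if;
        when load =>                      -- latch the byte fetched by byte_rd
          sreg      <= byte_in;
          last_seen <= byte_last;
          bitcnt    <= (others => '0');
          state     <= shift_lo;
        when shift_lo =>                  -- LSB first: present sreg(0) with CCLK low
          din   <= sreg(0);
          cclk  <= '0';
          state <= shift_hi;
        when shift_hi =>                  -- rising CCLK edge clocks the bit into the FPGA
          cclk <= '1';
          sreg <= '0' & sreg(7 downto 1);
          if bitcnt = 7 then
            if last_seen = '1' then
              state <= flush_lo;
            else
              byte_rd <= '1';
              state   <= load;
            end if;
          else
            bitcnt <= bitcnt + 1;
            state  <= shift_lo;
          end if;
        when flush_lo =>                  -- bitstream sent: keep clocking with DIN high
          din   <= '1';
          cclk  <= '0';
          state <= flush_hi;
        when flush_hi =>                  -- ...until DONE finally goes high
          cclk <= '1';
          if done = '1' then
            state <= finished;
          else
            state <= flush_lo;
          end if;
        when finished =>
          cclk <= '0';
      end case;
    end if;
  end process;
end architecture;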
I have just bought myself a Spartan 3 Starter Kit. I usually use a 2E kit in my lab and have never had any problems; however, when trying to connect to the board using iMPACT (part of ISE 8.2i) I get error 923. The console output is:

Welcome to iMPACT
// *** BATCH CMD : setMode -bs
// *** BATCH CMD : setMode -bs
GUI --- Auto connect to cable...
// *** BATCH CMD : setCable -port auto
AutoDetecting cable. Please wait.
PROGRESS_START - Starting Operation.
Connecting to cable (Parallel Port - LPT1).
Checking cable driver.
Installing WinDriver6...Installing WinDriver6...
Installer exit code = 1.
Successful.
Connecting to cable (Parallel Port - LPT2).
Checking cable driver.
Installing WinDriver6...Installing WinDriver6...
Installer exit code = 1.
Successful.
Connecting to cable (Parallel Port - LPT3).
Checking cable driver.
Installing WinDriver6...Installing WinDriver6...
Installer exit code = 1.
Successful.
Connecting to cable (Parallel Port - LPT4).
Checking cable driver.
Installing WinDriver6...Installing WinDriver6...
Installer exit code = 1.
Successful.
Connecting to cable (Usb Port - USB21).
Invalid OS minor version = 2.
Cable connection failed.
PROGRESS_END - End Operation.
Elapsed time = 3 sec.
Cable autodetection failed.
WARNING:iMPACT:923 - Can not find cable, check cable setup !

The board is on LPT1, and I have never had any problems with that parallel port, as I use it to print when my board isn't connected!

Anyone have any ideas?

TIA

Article: 110807
John Kortink wrote:
> I'd like to erase a number of EPM7064SLC44's that have
> been programmed with JTAG disabled. Anyone have the
> 'secret formula' or the special programmer equipment
> that can do it?

As far as I know, a standard Data I/O (or similar) programmer should be able to do this - assuming you can put the device into the programmer's socket. If not, you've got yourself a problem.

Best regards,
Ben

Article: 110808
greg@accupel.com.nospam wrote:
>I'm trying to configure a Spartan 3 via Slave Serial mode at power up.
>I'm storing the configuration file in SPI Flash and using a uP to read
>the Flash and send the configuration bit stream (and clock) to the FPGA.
>(I've considered using the FPGA Master Serial mode to clock the SPI
>Flash, and just using the uP to initiate the flash read instruction, but
>the hardware is not currently configured that way, so I want to get it
>working in the Slave Serial mode first on my current hardware.)
>
>I've read Xapp 502 but it still leaves me confused on a couple of points.
>
>1. The app note says a .bit file contains header info that should not be
>downloaded to the FPGA, so I'm trying to use a .bin file. However, I
>thought the header information allowed the clock rate to be increased in
>the Master Serial mode. Does the .bin file also include that
>information? (If I try to use Master Serial mode later.)
>
>2. When I serialize the .bin file bytes into a bit stream, do I load
>the bits from each byte MSB or LSB first into the FPGA?
>
>3. When I finish loading the entire .bin file I wait for DONE to go
>high, and while waiting test if INIT is low (which indicates a CRC
>error). So far I never get a DONE high or an INIT low. Seems like I
>should get one or the other? Configuration works fine using Platform
>Cable USB (JTAG). M0,M1,M2 are configured correctly in Slave Serial mode.

Compare your uP output to that of a working download cable? Record the output of the uP and see if it really does what you expect? Will it work if you slow down the load process? (RF issues) Maybe you have to do some 16/32/64-bit swap, etc.? (endian issues)

Article: 110809
What is the interface voltage for the Spartan3 header? If it's 2.5V, the Parallel Cable III has a little trouble working with the in/out levels. Try strapping Vref to 3.3V instead; it powers the buffers in the Parallel Cable III.

Just FYI: this isn't any trouble for the Parallel Cable IV or the USB Programming Cable.

- John_H

"Skyrunner" <dom.hewett@gmail.com> wrote in message news:1161632394.668147.74650@m73g2000cwd.googlegroups.com...
>I have just bought myself a Spartan 3 Starter Kit. I usually use a 2E
> kit in my lab and have never had any problems; however, when trying to
> connect to the board using iMPACT (part of ISE 8.2i) I get error 923.
> The console output is:
>
> Welcome to iMPACT
> // *** BATCH CMD : setMode -bs
> // *** BATCH CMD : setMode -bs
> GUI --- Auto connect to cable...
> // *** BATCH CMD : setCable -port auto
> AutoDetecting cable. Please wait.
> PROGRESS_START - Starting Operation.
> Connecting to cable (Parallel Port - LPT1).
> Checking cable driver.
> Installing WinDriver6...Installing WinDriver6...
> Installer exit code = 1.
> Successful.
> Connecting to cable (Parallel Port - LPT2).
> Checking cable driver.
> Installing WinDriver6...Installing WinDriver6...
> Installer exit code = 1.
> Successful.
> Connecting to cable (Parallel Port - LPT3).
> Checking cable driver.
> Installing WinDriver6...Installing WinDriver6...
> Installer exit code = 1.
> Successful.
> Connecting to cable (Parallel Port - LPT4).
> Checking cable driver.
> Installing WinDriver6...Installing WinDriver6...
> Installer exit code = 1.
> Successful.
> Connecting to cable (Usb Port - USB21).
> Invalid OS minor version = 2.
> Cable connection failed.
> PROGRESS_END - End Operation.
> Elapsed time = 3 sec.
> Cable autodetection failed.
> WARNING:iMPACT:923 - Can not find cable, check cable setup !
>
> The board is on LPT1, and I have never had any problems with that
> parallel port, as I use it to print when my board isn't connected!
>
> Anyone have any ideas?
>
> TIA

Article: 110810
I can't find any definitive info as to what the board uses for the JTAG interface; some people say 2.5V, others say 3.3V. How would I go about changing this? I can't seem to find anything in the board manual.

I did intend to get the Parallel Cable IV but I couldn't get one with my development board order (so I am using the bundled III), and they seem to be quite hard to source in the UK.

Cheers,

Article: 110811
Yes, I am using mb 4.0. I built the kernel using xconfig. Please do keep me posted if you make progress.

Thanks,
Scott

Francesco wrote:
> Thanks Scott.
> I'm using the ml403 (ISE 8.1).
> Your link is very interesting.
> I just started to debug the kernel.
> If I make any progress I'll send you an email.
> Do you use xconfig or menuconfig to build the kernel?
> I'm using xconfig... it shouldn't make any difference, but I have a friend
> with experience in Linux and he suggested using menuconfig.
>
> I also read that people have fixed this problem using microblaze 3.0
> (I'm using microblaze 4.0).
> But I do not think this is the "real" problem, because the kernel is
> running... what I need to do is "only" mount the root in the RAM.
> Maybe using microblaze 3.0 we "mask" the problem... some setting will
> be different and this error does not happen... I want to go in deep and
> fix it properly and then "share" my results with you.
>
> Francesco.
>
> ScottNortman wrote:
> > I had the same problem... what FPGA are you using? I am using the
> > Spartan 3E Starter Kit.
> >
> > I spent some time looking around the web and I found a site which
> > explains how to append the "root=" command properly; here is a link:
> >
> > http://www.ucdot.org/article.pl?sid=03/01/11/1049210&mode=thread
> >
> > However, even though I followed the instructions, I still got a new
> > error:
> >
> > ********* location
> > VFS test name = </dev/root>
> > Micr
> > VFS fs_name = <ext2>ash probe(0x21000000
> >
> > VFS fs_name = <romfs> 21000000
> > VFS root name <1f:01>
> > *********
> > arena open of 1 failed!evice at location zero
> >
> > VFS: tried fs_name = <ext2> err = -19
> >
> > Hope this helps; if you make any progress please let me know.
> >
> > Thanks,
> > Scott Nortman
> >
> > David Ashley wrote:
> > > Francesco wrote:
> > > > Hi, I'm trying to port uClinux using microblaze 4.0.
> > > > When I try to run the OS I get the following error message:
> > > >
> > > > Kernel panic: VFS: Unable to mount root fs on 1f:00
> > > >
> > > > Has anybody had a similar problem?
> > > >
> > > > Thanks in advance,
> > > > Francesco
> > >
> > > Linux is up but it can't mount the root
> > > partition. What is your kernel command line?
> > > What is the "root=xxx" specifically? That device
> > > number 1f:00 seems screwy. It's not listed in
> > > include/linux/major.h.
> > >
> > > -Dave
> > >
> > > --
> > > David Ashley http://www.xdr.com/dash
> > > Embedded linux, device drivers, system architecture

Article: 110812
Hello. I have been trying to write code that should infer a ROM using block RAMs. The target device is a Spartan-II series FPGA. I have come up with the following code, which does get synthesized properly. But I wonder if I have achieved my objective of realising the ROM in block RAM, as the synthesis report never shows that any block RAM was used. Here's the code:

LIBRARY ieee;
USE ieee.std_logic_1164.all;
USE ieee.std_logic_arith.all;
USE ieee.numeric_std.all;
USE ieee.std_logic_unsigned.all;

ENTITY BLOCKROM_Coeffs IS
   PORT(
      clk   : IN  std_logic;
      reset : IN  std_logic;
      en    : IN  std_logic;
      addr  : IN  std_logic_vector(7 downto 0);
      data  : OUT std_logic_vector(15 downto 0)
   );
-- Declarations
END ENTITY BLOCKROM_Coeffs;

ARCHITECTURE blkram_ROM OF BLOCKROM_Coeffs IS
   type rom_type is array(255 downto 0) of std_logic_vector(15 downto 0);
   constant ROM : rom_type := (others => X"0000");
   attribute rom_extract : string;
   attribute rom_style   : string;
   attribute rom_extract of ROM : constant is "yes";
   attribute rom_style   of ROM : constant is "auto";
BEGIN
   process(clk)
   begin
      if (clk'event and clk = '1') then
         if (en = '1') then
            data <= ROM(conv_integer(addr));
         end if;
      end if;
   end process;
END ARCHITECTURE blkram_ROM;

As far as I know, I think it is because I have not used the block_ram attribute in the code... but I've seen a couple of code examples in the newsgroup and on the internet, and nowhere have I come across such an attribute for inferring a ROM out of block RAMs.

Second question: I've seen two major methods for getting memory into a design. One of them talks about using library primitives in which there is no definition for the architecture of the memory (or maybe it is defined somewhere else). It directly instantiates a component; for example, the library primitive for a ROM which has a 1-bit-wide data bus and is 16 words deep is called ROM16X1 (http://toolbox.xilinx.com/docsan/xilinx8/books/data/docs/lib/lib0363_...). Does this mean that if I instantiate a component using this library primitive then I don't need to worry about the internal architecture - just use it directly and everything gets synthesized all right? Additionally, some of them are called macros whereas some of them are called primitives... could someone please shed some light on this? (Here's the Xilinx link about memory elements: http://toolbox.xilinx.com/docsan/xilinx8/books/data/docs/lib/lib0039_...)

The second method is directly writing HDL code, which is more of a behavioral description of what the memory does. What's the difference between the two methods, and do both work with synthesis tools? I've also heard that one has to do different kinds of memory initialisation when simulating and when synthesizing - that is, it would be different when I want to simulate the behavior using ModelSim versus when I'm synthesizing it. There's a lot of scattered information everywhere, but nowhere can I find information about what difference these two methods make and when one is supposed to use which.

Hoping to hear from you

Article: 110813
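One likely culprit worth checking in the post above: with (others => X"0000") every ROM location is identical, so the synthesizer can legally optimize the whole array down to constant zeros, in which case no block RAM ever appears in the report. A minimal sketch with distinguishable placeholder contents and a registered read (which matches the block RAM's synchronous read port) might look like this; the init function contents are illustrative only:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity rom_bram_sketch is
  port (
    clk  : in  std_logic;
    en   : in  std_logic;
    addr : in  std_logic_vector(7 downto 0);
    data : out std_logic_vector(15 downto 0)
  );
end entity;

architecture rtl of rom_bram_sketch is
  type rom_type is array (0 to 255) of std_logic_vector(15 downto 0);

  -- Placeholder non-constant contents; a real design would put its
  -- coefficient table here instead.
  function init_rom return rom_type is
    variable r : rom_type;
  begin
    for i in r'range loop
      r(i) := std_logic_vector(to_unsigned(i * 3, 16));
    end loop;
    return r;
  end function;

  constant ROM : rom_type := init_rom;
begin
  -- Synchronous read: the registered output maps onto the block RAM's
  -- own output register, which helps the tool infer BRAM.
  process (clk)
  begin
    if rising_edge(clk) then
      if en = '1' then
        data <= ROM(to_integer(unsigned(addr)));
      end if;
    end if;
  end process;
end architecture;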
Michael Schöberl wrote:
> what am I doing wrong?
> I think 300 MHz is still within the specs...
> are there any issues at that speed?
> is the DCM placement critical at 300 MHz?

I don't know much about PPCs, but here are some thoughts on DCM problems:

Assuming you've covered all the basics (clean power, same-bank SSO OK, proper DCM constraints and attributes, static timing met, appropriate DCM resets and DCM-in-lock-with-CLKFX-working status bit monitoring logic, DCM startup works in simulation, FPGA not overheating):

Q1) What is the DCM topology used to generate all of those related clocks? (any DCM cascade, which DCM outputs are in use, etc.)
Q2) How are you loading your bitstream onto the FPGA? (JTAG / PROM / other)
Q3) What are the topside markings on your V2Pro?
Q4) How many parts/boards exhibit this problem?

Have you looked at:

- V2Pro errata for your chip version/stepping
- Answer Record 13756: basic DCM care and feeding
- Answer Records 14425, 19005: proper DCM startup and reset
- Answer Record 10972: other DCM status bits to monitor
- XAPP685 "Duty Cycle Distortion" correction macro for V2Pro DCMs
  (corner DCMs have problems in some parts)
- Answer Record 15130: magic bitgen "Centered_x#y#" option setting for DCM;
  zeroes the phase shift delay tap setting
- Answer Record 20585: certain small integer CLKFX ratios don't work;
  magic bitgen "PLcentered_x#y#" option zeroes the clkfx setting
- Answer Record 11778: DCM startup issues with JTAG download

post listing other DCM quirks:
http://groups.google.com/group/comp.arch.fpga/msg/6e5b0b6da92b4ad1

Other suggestions:

- Is the DCM locked to a specific location near both the clock input and BUFG? Often the tools will do a horrible automatic placement, routing the DCM nets up and down the chip spine instead of placing them such that direct connects can be used.
- If you suspect a high-frequency duty cycle problem, in addition to the bitgen "center" options, also try changing the DCMs' DESKEW_ADJUST to SOURCE_SYNCHRONOUS (which disables the DCM feedback delay line).
- Monitor the internal clock by using a DDR output FF, forwarding the clock (Answer Record 12406) through a terminated LVDS output buffer appropriately probed by a high-bandwidth scope.

Brian

Article: 110814
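To make the DESKEW_ADJUST suggestion above concrete, here is a minimal DCM instantiation sketch with that generic set. The clock period, multiply/divide ratio and port subset are assumptions for illustration, not values from the thread:

library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity dcm_deskew_sketch is
  port (
    clkin     : in  std_logic;  -- assumed 100 MHz board clock
    rst       : in  std_logic;
    clkfx_out : out std_logic;  -- 300 MHz synthesized clock
    locked    : out std_logic
  );
end entity;

architecture rtl of dcm_deskew_sketch is
  signal clk0, clk0_fb, clkfx : std_logic;
begin
  dcm_inst : DCM
    generic map (
      CLKIN_PERIOD   => 10.0,                  -- assumption: 100 MHz input
      CLKFX_MULTIPLY => 3,                     -- assumption: x3/1 for 300 MHz
      CLKFX_DIVIDE   => 1,
      DESKEW_ADJUST  => "SOURCE_SYNCHRONOUS"   -- disables the feedback delay line
    )
    port map (
      CLKIN    => clkin,
      CLKFB    => clk0_fb,
      RST      => rst,
      PSEN     => '0',
      PSINCDEC => '0',
      PSCLK    => '0',
      DSSEN    => '0',
      CLK0     => clk0,
      CLKFX    => clkfx,
      LOCKED   => locked
    );

  -- CLK0 must be fed back through a BUFG for the DCM to deskew/lock.
  fb_bufg : BUFG port map (I => clk0,  O => clk0_fb);
  fx_bufg : BUFG port map (I => clkfx, O => clkfx_out);
end architecture;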
I am evaluating using the Altera Cyclone with Quartus SOPC versus the Xilinx Spartan3E and PicoBlaze. I need a soft-core processor and I think PicoBlaze would be enough. SOPC and Nios-II are very powerful, but the learning curve looks like a potential nightmare to me. In order to use SOPC I might have to get involved writing custom components to do the job, and then one has to master the Avalon interface. That looks like a lot of potential debugging time.

The Xilinx solution seems more direct, and under my control, since PicoBlaze is stand-alone and does not depend on so many bus-interrelated components and SOPC infrastructure - easier and quicker to write direct interfaces. Nios seems to need much more of the SOPC (RAM, ROM, Avalon, etc.) around it to work. Also, it seems like the Nios/SOPC solution is likely to require far more gates than a Xilinx/PicoBlaze implementation.

I would be curious to know any of your experiences with SOPC/Nios-II. I have very limited R&D time for this project.

Thanks,
Chris.

Article: 110815
Chris wrote:
> I would be curious to know any of your experiences with SOPC/Nios-II. I
> have very limited R&D time for this project.

I guess it depends on *what* you want to interface to the NIOS. Avalon is compatible with Wishbone, so if you're going to be interfacing Wishbone components then it's a no-brainer. Otherwise, it's no more difficult than writing Wishbone wrappers for your non-Wishbone components. Of course some components already exist, such as an SDRAM controller for example. You also get I/O blocks such as registers and GPIO that can have associated NIOS interrupts, UARTs, etc.

Putting it all together is quite straightforward in the SOPC interface. You get a visual representation of the bus interconnects and can massage the memory map as you see fit. Arbitration for multiple bus masters is automagically taken care of. You also get to choose the size/complexity of the NIOS itself - IIRC the smallest is 600-700 LEs?!?

Once you've decided on your architecture, you 'generate' the system, which produces a mass of HDL files in your main project directory. Then instantiate the top level in your Quartus design and you're good to go.

The software is written from within the NIOS-II IDE. You simply build a 'system library' for your design and then start on your application. Personally I hate Eclipse with a passion, so I do all my editing outside the IDE and use it solely for the big red 'RUN' button. uCOS is also an option, or you can go bare-bones with the HAL library. You get drivers for all system components and ISR hooks etc. If you actually already know what you're looking for, you can usually find it in the doco! ;)

There are also nice things like a JTAG UART which allows you to spit debug messages directly to the IDE console. During development you can target your CODE sections to (SD)RAM and download/run everything via JTAG, so there's no need for ROM emulators etc. I haven't used the debugger extensively, but it kinda works.

Overall, it's relatively "painless" once you get the hang of things. That certainly can't be said for a lot of embedded systems, let alone soft-cores running in FPGAs...

I can't comment on Xilinx/PicoBlaze...

Regards,

--
Mark McDougall, Engineer
Virtual Logic Pty Ltd, <http://www.vl.com.au>
21-25 King St, Rockdale, 2216
Ph: +612-9599-3255  Fax: +612-9599-3266

Article: 110816
Hello,

I'm having a problem when Data2Mem runs and was wondering if someone could help me out. I have a simple dual-PPC system, with both procs on the same PLB bus. I also have two BRAMs and the DDR memory on the same PLB bus as the PPC cores. I'm trying to run a separate program on each PPC core (i.e. I am not trying to have the procs communicate with each other through a shared memory). If I create a SW project and execute it on PPC405_0, the design synthesizes without any problems and executes the code correctly. However, when I create a second SW project and tell it to execute on PPC405_1, I get the following error message from Data2Mem when I try to download it to the board:

"ADDRESS_SPACE or ADDRESS_MAP tag name 'ppc405_1' was not found. Some data may have not been translated."

I've tried having both programs load into the same BRAM, as well as into separate BRAMs (one for each PPC core), but I get the error both ways. I think I may have to modify the .bmm file, but I don't know which one I would have to modify, as there is a system.bmm, system_bd.bmm, and system_stub.bmm in my project's implementation folder. I'd greatly appreciate it if someone could give me some insight into fixing this problem.

Also, as an aside, does anyone know if it is possible to make a .ace file that has two .elf files? My end goal is to have two separate MontaVista Linux kernels (one executing on each PPC core), so I would like to just create the .elf files for each OS and be able to put them on a CF card and boot both OSes from that.

Again, any help would be greatly appreciated. Thanks for your time.

Respectfully,
Steve

Article: 110817
On Mon, 23 Oct 2006 06:53:24 -0700, gerd <gerd.van.den.branden@ehb.be> wrote:

>Hi,
>
>I want to use the FSL bus to connect two MicroBlazes together. Because I am not familiar with the FSL bus, I first added a peripheral core using the configure-coprocessor wizard, to instantiate a default peripheral together with the FSL drivers. However, when I run the SW application it stalls when writing to the FSL bus. I don't get any response afterwards.
>
>First, I did not make any changes to the instantiation of the FSL bus. Later on, I changed the clock connections (later on, also in the FSL's MPD file). Unfortunately, none of these measures made any difference.
>
>Is there anyone that can provide me with an example design that successfully uses the FSL bus, together with the application software (or a stripped version)? I cannot find any satisfying reference designs on the web, nor on the Xilinx site.
>
>All help would be welcome!!
>
>Thanks,
>
>Gerd

You must instantiate two FSL buses to connect two MicroBlazes, one for each direction of communication. In each instance, you should connect FSL_Clk, Sys_Rst, FSL_M_Clk and FSL_S_Clk (all three clocks together). Both MicroBlazes should work at the same clock frequency.

You should also use the non-blocking versions of the read/write functions (nput, ncput, nget and ncget).

And that is all there is to it!

Zara

Article: 110818
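As a rough picture of the wiring advice above, here is a VHDL-level sketch of one of the two FSL instances, with all three clocks tied to the one system clock. The fsl_v20 port list is reconstructed from the standard FSL signal naming and should be checked against the MPD in your EDK install; the mb0_*/mb1_* signals are illustrative stand-ins for the MicroBlaze master/slave FSL connections:

library ieee;
use ieee.std_logic_1164.all;

entity fsl_wiring_sketch is
  port (
    sys_clk : in std_logic;
    sys_rst : in std_logic;
    -- MicroBlaze 0 master FSL port (illustrative names)
    mb0_m_data    : in  std_logic_vector(0 to 31);
    mb0_m_control : in  std_logic;
    mb0_m_write   : in  std_logic;
    mb0_m_full    : out std_logic;
    -- MicroBlaze 1 slave FSL port (illustrative names)
    mb1_s_data    : out std_logic_vector(0 to 31);
    mb1_s_control : out std_logic;
    mb1_s_read    : in  std_logic;
    mb1_s_exists  : out std_logic
  );
end entity;

architecture rtl of fsl_wiring_sketch is
  component fsl_v20
    port (
      FSL_Clk       : in  std_logic;
      SYS_Rst       : in  std_logic;
      FSL_M_Clk     : in  std_logic;
      FSL_M_Data    : in  std_logic_vector(0 to 31);
      FSL_M_Control : in  std_logic;
      FSL_M_Write   : in  std_logic;
      FSL_M_Full    : out std_logic;
      FSL_S_Clk     : in  std_logic;
      FSL_S_Data    : out std_logic_vector(0 to 31);
      FSL_S_Control : out std_logic;
      FSL_S_Read    : in  std_logic;
      FSL_S_Exists  : out std_logic
    );
  end component;
begin
  -- MB0 -> MB1 link; a second, mirror-image instance carries MB1 -> MB0.
  fsl_mb0_to_mb1 : fsl_v20
    port map (
      FSL_Clk       => sys_clk,   -- all three clocks tied together,
      FSL_M_Clk     => sys_clk,   -- as described in the reply above
      FSL_S_Clk     => sys_clk,
      SYS_Rst       => sys_rst,
      FSL_M_Data    => mb0_m_data,
      FSL_M_Control => mb0_m_control,
      FSL_M_Write   => mb0_m_write,
      FSL_M_Full    => mb0_m_full,
      FSL_S_Data    => mb1_s_data,
      FSL_S_Control => mb1_s_control,
      FSL_S_Read    => mb1_s_read,
      FSL_S_Exists  => mb1_s_exists
    );
end architecture;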
How do you set up a differential DDR output clock? In the Spartans there was a DDR register where you would tie one data input to 1 and the other to 0 at the output pin flip-flop. The Virtex4 has OSERDES modules - do I use them?

Brad Smallridge
aivision

Article: 110819
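One common approach on Virtex-4 - a sketch, not an answer confirmed in this thread - is the same tie-one-input-high trick using the ODDR primitive (D1='1', D2='0'), with an OBUFDS providing the differential pair; OSERDES isn't required just to forward a clock:

library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity ddr_clk_out is
  port (
    clk   : in  std_logic;   -- internal clock to forward
    clk_p : out std_logic;   -- differential pair to the pins
    clk_n : out std_logic
  );
end entity;

architecture rtl of ddr_clk_out is
  signal ddr_q : std_logic;
begin
  -- Q outputs '1' on the rising edge and '0' on the falling edge,
  -- reproducing the clock at the output flip-flop.
  oddr_inst : ODDR
    generic map (
      DDR_CLK_EDGE => "OPPOSITE_EDGE",
      INIT         => '0',
      SRTYPE       => "SYNC"
    )
    port map (
      Q  => ddr_q,
      C  => clk,
      CE => '1',
      D1 => '1',
      D2 => '0',
      R  => '0',
      S  => '0'
    );

  -- Drive the forwarded clock off-chip as a differential pair.
  obufds_inst : OBUFDS
    port map (
      I  => ddr_q,
      O  => clk_p,
      OB => clk_n
    );
end architecture;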
I have a design in which all the data will be stored in a memory (buffer). For that design I am writing a test document in which I want to make use of the data of that design. I don't have direct access to that memory, so I have considered this idea: there is a separate memory on the test-document side, and I will write another memory controller which acts as a replicator, so that all the memory contents of the design will be transferred to the memory which exists on the side of the test document.

Please give suggestions on this idea...

Regards,
Senthil

Article: 110820
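If the "replicator" above is a testbench-side shadow of the design's buffer, one minimal sketch (all names and widths are illustrative) is a process that snoops the design's write port and repeats every write into a local array, so the testbench always holds an up-to-date copy:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity mem_mirror is
  port (
    clk     : in  std_logic;
    -- write-port signals snooped from the design under test
    wr_en   : in  std_logic;
    wr_addr : in  std_logic_vector(9 downto 0);
    wr_data : in  std_logic_vector(15 downto 0);
    -- read port for the testbench side
    rd_addr : in  std_logic_vector(9 downto 0);
    rd_data : out std_logic_vector(15 downto 0)
  );
end entity;

architecture sim of mem_mirror is
  type mem_t is array (0 to 1023) of std_logic_vector(15 downto 0);
  signal shadow : mem_t := (others => (others => '0'));
begin
  -- Every write into the design's buffer is repeated here.
  process (clk)
  begin
    if rising_edge(clk) then
      if wr_en = '1' then
        shadow(to_integer(unsigned(wr_addr))) <= wr_data;
      end if;
    end if;
  end process;

  rd_data <= shadow(to_integer(unsigned(rd_addr)));
end architecture;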
> The documents I found mostly dealt only with the pin configuration and
> didn't explain how to obtain the image pixels. I plan to use a
> DS90CR288A to convert from LVDS to parallel. After that I don't know
> what I should do with the parallel signals.

After you use that chip, the data should be available on certain lines, if it's a "base" configuration:

camdat : process(clk)
begin
   if (clk'event and clk = '1') then
      -- eight bit output
      camdat_out(0) <= '0';
      camdat_out(1) <= cam_in(0);
      camdat_out(2) <= cam_in(1);
      camdat_out(3) <= cam_in(2);
      camdat_out(4) <= cam_in(3);
      camdat_out(5) <= cam_in(4);
      camdat_out(6) <= cam_in(6);
      camdat_out(7) <= cam_in(27);
      camdat_out(8) <= cam_in(5);
      -- twelve bit output
      -- camdat_out(0) <= cam_in(3);
      -- camdat_out(1) <= cam_in(4);
      -- camdat_out(2) <= cam_in(6);
      -- camdat_out(3) <= cam_in(27);
      -- camdat_out(4) <= cam_in(5);
      -- camdat_out(5) <= cam_in(7);
      -- camdat_out(6) <= cam_in(8);
      -- camdat_out(7) <= cam_in(9);
      -- camdat_out(8) <= cam_in(12);
      -- test output
      -- camdat_out <= cam_col_count;
   end if;
end process;

You didn't say what language or chip you are using.

Brad Smallridge
aivision

Article: 110821
Stevo_V2pro wrote:
> I'm having a problem when Data2Mem runs and was wondering if
> someone could help me out. I have a simple dual-PPC system, with both
> procs on the same PLB bus. I also have two BRAMs and the DDR memory on
> the same PLB bus as the PPC cores. I'm trying to run a
> separate program on each PPC core (i.e. I am not trying to have the
> procs communicate with each other through a shared memory). If I create
> a SW project and execute it on PPC405_0, the design synthesizes
> without any problems and executes the code correctly. However, when I
> create a second SW project and tell it to execute on PPC405_1, I get
> the following error message from Data2Mem when I try to download
> it to the board:
>
> "ADDRESS_SPACE or ADDRESS_MAP tag name 'ppc405_1' was not found. Some
> data may have not been translated."

The last time I tried this it was simply a matter of specifying the software under the Applications tab in XPS. Is this where you are making changes? Is "Mark to Initialize BRAMs" set on both projects?

I also started with a dual-PPC example project from Xilinx, so it might be a good idea to download that from the Xilinx site.

Alan Nishioka

Article: 110822
enavacchia@virgilio.it wrote:
>
> Any suggestions?
>

Thank you all for your extensive explanations and suggestions!

The i486 constraint comes from the main specification's requirement not to modify any part of the operating system. I have the complete source code (written many years ago in PL/M386), but we decided that porting to another platform or re-validating another core (Pentium, Transmeta, VIA, Geode...) is not possible... Maybe we'd better reconsider the specifications!

I think we'll use the FPGA for glue logic, RAM interfacing and dual-port RAM emulation, and leave the processor as an off-the-shelf device...

Regards,
Eugenio.

Article: 110823
Why do you want to use a separate chip for LVDS-to-parallel conversion? Is the frequency too high? The Spartan3 (and others) has LVDS integrated in the IOBs.

Cheers,
Guru

ivan@gmail.com wrote:
> Hi All,
>
> I have a project to get the image data from a camera and want to convert
> it to grayscale information. The camera has a Camera Link output and can
> be connected directly to a universal frame grabber. I am trying to
> replace the frame grabber with an FPGA. Does somebody know, or have
> information on, how to get the Camera Link specification?
>
> The documents I found mostly dealt only with the pin configuration and
> didn't explain how to obtain the image pixels. I plan to use a
> DS90CR288A to convert from LVDS to parallel. After that I don't know
> what I should do with the parallel signals.
>
> Thanks in advance,
>
> -ivan

Article: 110824
Thanks a lot,

I am using VHDL and a Xilinx XC2V1000. However, I have a limited number of I/O pins (I only have 30 pins for this purpose, because the board was originally designed to get the image directly from a camera, and I was a little surprised when it came to me with a PCMCIA frame grabber). Right now I am thinking of using a small CPLD or FPGA just for the camera.

Now I understand how to get the image data, but I still have a question about camera control. Does somebody know what signals need to be sent to control the camera?

Again, thank you very much for all your help.

-ivan

Brad Smallridge wrote:
> > The documents I found mostly dealt only with the pin configuration and
> > didn't explain how to obtain the image pixels. I plan to use a
> > DS90CR288A to convert from LVDS to parallel. After that I don't know
> > what I should do with the parallel signals.
>
> After you use that chip, the data should be available on certain
> lines, if it's a "base" configuration:
>
> camdat : process(clk)
> begin
>    if (clk'event and clk = '1') then
>       -- eight bit output
>       camdat_out(0) <= '0';
>       camdat_out(1) <= cam_in(0);
>       camdat_out(2) <= cam_in(1);
>       camdat_out(3) <= cam_in(2);
>       camdat_out(4) <= cam_in(3);
>       camdat_out(5) <= cam_in(4);
>       camdat_out(6) <= cam_in(6);
>       camdat_out(7) <= cam_in(27);
>       camdat_out(8) <= cam_in(5);
>       -- twelve bit output
>       -- camdat_out(0) <= cam_in(3);
>       -- camdat_out(1) <= cam_in(4);
>       -- camdat_out(2) <= cam_in(6);
>       -- camdat_out(3) <= cam_in(27);
>       -- camdat_out(4) <= cam_in(5);
>       -- camdat_out(5) <= cam_in(7);
>       -- camdat_out(6) <= cam_in(8);
>       -- camdat_out(7) <= cam_in(9);
>       -- camdat_out(8) <= cam_in(12);
>       -- test output
>       -- camdat_out <= cam_col_count;
>    end if;
> end process;
>
> You didn't say what language or chip you are using.
>
> Brad Smallridge
> aivision
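Not an answer from this thread, but on the camera-control question: in the Camera Link standard the frame grabber side drives four general-purpose LVDS control pairs (CC1-CC4), whose meaning is camera-specific (CC1 is often an external trigger/EXSYNC input), plus an asynchronous serial channel (SerTC/SerTFG, conventionally 9600 baud at power-up) for configuration commands; the camera manual defines what to send on each. A minimal sketch of a trigger-pulse generator for CC1 (names and pulse width are illustrative):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity cc1_trigger is
  generic (
    PULSE_LEN : natural := 100   -- pulse width in clk cycles (assumed value)
  );
  port (
    clk  : in  std_logic;
    fire : in  std_logic;        -- one-cycle request to trigger a frame
    cc1  : out std_logic         -- to the CC1 LVDS driver
  );
end entity;

architecture rtl of cc1_trigger is
  signal count : natural range 0 to PULSE_LEN := 0;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if fire = '1' and count = 0 then
        count <= PULSE_LEN;      -- start a new pulse
      elsif count > 0 then
        count <= count - 1;
      end if;
    end if;
  end process;

  cc1 <= '1' when count > 0 else '0';
end architecture;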