Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
skyworld wrote:
> Hi,
> I am porting my design from ASIC to FPGA, but I found the timing in
> the FPGA is very bad. What I know is to set up the clock constraints; I
> don't know how to add other constraints. Can anybody give me some
> hints? Thanks very much.
> I use an Altera Cyclone II.
>
> skyworld

Pipeline the design. Pay attention to the amount of combinatorial logic between registers.

Article: 108476
David Ashley wrote:
> David Ashley wrote:
>
>> I want to create a 1152 by 6 bit rom and I want to use
>> [cut]
>
> I found this thread...
> http://groups.google.com/group/comp.lang.vhdl/browse_thread/thread/9d83482446592315/cb311613c24c9d31?lnk=st&q=this_rom+small_rom+a_length&rnum=3#cb311613c24c9d31
>
> Tues Nov 13 2001 Mike Treseler suggests something, see my attempt below.
> This synthesizes quickly but doesn't seem to use a BRAM, it
> uses 400+ LUTs to do it... want to use BRAMs as they're
> unused, and LUTs are in short supply...

BRAM must be clocked. Your design does not clock the memory, therefore it is forced to use LUT RAM.

Article: 108477
Ray Andraka wrote:
> BRAM must be clocked. Your design does not clock the memory, therefore
> it is forced to use LUT RAM.

I added a clock and now it's using BRAM, and the LUT use count went down to 102 from 400+. That was easy. Thanks!

-Dave

--
David Ashley    http://www.xdr.com/dash
Embedded linux, device drivers, system architecture

Article: 108478
Thanks for your reply. I was skeptical myself. I do have my equations and my VHDL code. Behavioural simulates correctly, post-synthesis and translate simulate correctly, then post-map fails simulation abysmally.

About 140-150 lines reporting removed "redundant logic" were reported by the mapper, in addition to two lines indicating VCC and GND were "optimized" away (see copy of first lines of output in my first post). I suspect that it is the removed logic that is needed to make the simulation work, because when I use "keep" statements, the simulation improves greatly, but is still half wrong at an early stage. However, I don't want to use kludges; I want the tool to recognize it is all needed from the way I write the VHDL.

Your sample VHDL is pretty much what I have; I just have a lot of simultaneous such lines at each timing period for different bit ranges going to different bit ranges. What could be causing everything to work fine at the first three stages and then have the post-map stage fail its simulation so badly? I certainly suspect all the removed "redundant logic" that the mapper is reporting. But how do I indicate that it is not really redundant, without using "keep" and "save" statements everywhere? I can't even blame unconnected signals (they are all connected), because that's not the problem: the mapper is not removing "unused logic".

Best regards,
-James

Article: 108479
skyworld schrieb:
> Hi,
> I am porting my design from ASIC to FPGA, but I found the timing in
> the FPGA is very bad. What I know is to set up the clock constraints; I
> don't know how to add other constraints. Can anybody give me some
> hints? Thanks very much.
> I use an Altera Cyclone II.

Learn the FPGA architecture and restructure your code to make it easier to map to it. For example, most architectures have a LUT - carry logic - DFF chain.

If you write something like

  if add then
    result = a + b;
  else
    result = a - b;
  end if;

you must be extremely lucky with the tools to get less than three LUTs per bit, with a critical path of two LUTs and a carry chain. If you write

  if add then
    temp = b;
  else
    temp = -b;
  end if;
  result = a + temp;

you need only one LUT per bit and the critical path is one LUT plus a carry chain. This is because the MUX is in front of the carry logic now and can be implemented in the same LUT as the non-carry logic of the adder.

There are more areas like these where you can make your design easier for the tools to optimize.

Kolja Sulimma

Article: 108480
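Kolja's rewrite above is a pure algebraic transformation: selecting between a+b and a-b is the same as negating b first and always adding, so only the placement of the mux relative to the adder changes, never the result. That equivalence is easy to sanity-check outside the HDL (a quick Python sketch, not FPGA code):

```python
# Sanity check of the add/sub rewrite: the mux-before-adder form
# computes exactly the same values as the two-operation form.
def addsub_two_ops(a, b, add):
    return a + b if add else a - b

def addsub_mux_first(a, b, add):
    temp = b if add else -b   # mux moved in front of the adder
    return a + temp

for a in range(-4, 5):
    for b in range(-4, 5):
        for add in (True, False):
            assert addsub_two_ops(a, b, add) == addsub_mux_first(a, b, add)
print("equivalent")  # -> equivalent
```

The same trick applies whenever a mux feeds only one operand of an arithmetic operator: fold the mux into the operand and the synthesis tool can pack it into the adder's LUTs.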
Hi Andreas,

In article <ee4jap$6du$1@news.lysator.liu.se>, ehliar@lysator
> Anyway, what I can confirm is that we have successfully synthesized
> an older version of OR1200 with both ISE 7.1 and 8.1. The exact
> version can be found on the course homepage at

How much of the resources are used by the OR1200? Can you fit the OR1200, an SDRAM controller and the Ethernet in there?

Thanks.
John.

Article: 108481
Hi,

I was wondering what is the fastest and most efficient interpolation algorithm that is easily adaptable to a Spartan 3 and able to process more than 60 frames per second?

I have read a paper about an adaptive Newton interpolation algorithm that is supposed to use fewer resources than bicubic, with image quality comparable to bicubic.

The algorithm will be used to scale the output of an LCD driver IC that outputs 18 to 24 bit RGB; the FPGA will scale the image down to less than 18 bits for use on a smaller LCD screen.

Any tips will be very helpful.

Regards,
P

Article: 108482
Hello out there!

I purchased an Altera Nios II Cyclone based (EP1C12F324) evaluation kit a while ago and didn't get to work with it until recently. At first I was happy to see how quickly I got the board up and running, interacting with my PC through an Ethernet LAN and serving web pages using the uClinux on-board http server.

Having had enough of the demo, I started developing an application of my own. I first downloaded (via the USB cable connector that provides a USB Blaster interface) a very simple code file to start flashing the LEDs. When that didn't work, I simplified the code to an even simpler test case where only one LED was involved. After that not working either, I started combing through the documentation in search of some clues. This is when I noticed a lot of little discrepancies between the real board and the document set.

To this day, I still haven't been able to get a simple LED to flash (not even once). I am getting a bit impatient with the whole thing and might just go back to a Xilinx board that is sitting around my office. Before I do that, I would like to know if anyone out there has played with this board and might have some hints for me.

Thanks in advance for any tidbits.

P.S. When I download the code to the board using Quartus II (6.0sp1), the download program seems all happy and returns with a "SUCCESS" message at the end of the process. The board on the other end always contains the same Nios II implementation running uClinux!!! Could it be that after the JTAG load is complete the FPGA resets and simply reloads the default application from the on-board configuration device?

Article: 108483
Weng Tianxiang wrote:
> Daniel S. wrote:
>> David Ashley wrote:
>> Since routing multiple 32+ bit buses consumes a fair amount of routing
>> and control logic which needs tweaking whenever the design changes, I
>> have been considering ring buses for future designs. As long as latency
>> is not a primary issue, the ring bus can also be used for data
>> streaming, with the memory controller simply being one more possible
>> target/initiator node.
>>
>> Using dual ring buses (clockwise + counter-clockwise) to link critical
>> nodes can take care of most latency concerns by improving proximity. For
>> large and extremely intensive applications like GPUs, the memory
>> controller can have multiple ring bus taps to further increase bandwidth
>> and reduce latency - look at ATI's X1600 GPUs.
>>
>> Ring buses are great in ASICs since they have no a priori routing
>> constraints. I wonder how well this would apply to FPGAs, since these are
>> optimized for linear left-to-right data paths, give or take a few
>> rows/columns. (I did some preliminary work on this and the partial
>> prototype reached 240MHz on V4LX25-10, limited mostly by routing and 4:1
>> muxes IIRC.)
>
> Hi Daniel,
> Here is my suggestion. For example, there are 5 components which have
> access to the DDR controller module. What I would like to do is:
> 1. Each of the 5 components has an output buffer shared by the DDR
> controller module;
> 2. The DDR controller module has an output bus shared by all 5
> components as their input bus.
>
> Each data word has an additional bit to indicate whether it is data or
> a command. If it is a command, it indicates which component the output
> bus is targeting. If it is data, the data belongs to the targeted
> component.
>
> Output data streams look like this:
> Command; data; ...; data; Command; data; ...; data;
>
> In the command data, you may add any information you like.
> The best benefit of this scheme is that it has no delays and no penalty
> in performance, and it has the minimum number of buses. I don't see
> that a ring bus has any benefits over my scheme.
>
> In the ring situation, you must have (N+1)*2 buses for N >= 2. In my
> scheme, you must have N+1 buses, where N is the number of components,
> excluding the DDR controller module.
>
> Weng

In a basic ring, there need to be only N segments to create a closed loop with N nodes, memory controller included. Double that for a fully doubly-linked bus.

Why use a ring bus?
- Nearly immune to wire delays, since each node inserts bus pipelining FFs with distributed buffer control (big plus for ASICs)
- Low signal count (all things being relative) at the memory controller:
  - 36 bits input (muxed command/address/data/etc.)
  - 36 bits output (muxed command/address/data/etc.)
  - Same interface regardless of how many memory clients are on the bus
- Can double as a general-purpose modular interconnect; this can be useful for node-to-node burst transfers like DMA
- Bandwidth and latency can be tailored by shuffling components, inserting extra memory controller taps or adding rings as necessary
- Basic arbitration is provided for free by node ordering

The only major downside to ring buses is worst-case latency. Not much of an issue for me since my primary interest is video processing/streaming - I can simply preload one line ahead and pretty much forget about latency.

Flexibility, scalability and routability are what make ring buses so popular in modern large-scale, high-bandwidth ASICs and systems. It is all a matter of trading some up-front complexity and latency for long-term gain. Since high-speed parallel buses end up needing pipelining to meet high-speed timings, the complexity and area delta between multiple parallel buses and ring-bus topologies is shrinking.

--
Daniel Sauvageau
moc.xortam@egavuasd
Matrox Graphics Inc.
1155 St-Regis, Dorval, Qc, Canada
514-822-6000

Article: 108484
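The latency trade-off in the ring-bus discussion above is easy to quantify: a single ring forces traffic one way around (up to N-1 hops), while a dual clockwise/counter-clockwise ring can always take the shorter direction. A few lines of Python illustrate the hop counts (the node count here is a made-up example, not any actual topology):

```python
# Hop count from node i to node j on an N-node ring.
# A single ring forces one direction of travel; a dual
# (clockwise + counter-clockwise) ring takes the shorter way.
def single_ring_hops(i, j, n):
    return (j - i) % n

def dual_ring_hops(i, j, n):
    d = (j - i) % n
    return min(d, n - d)

n = 6  # e.g. 5 client nodes plus one memory-controller tap (hypothetical)
worst_single = max(single_ring_hops(0, j, n) for j in range(n))
worst_dual = max(dual_ring_hops(0, j, n) for j in range(n))
print(worst_single, worst_dual)  # -> 5 3
```

So doubling the segments roughly halves the worst-case hop count, which is the "improving proximity" effect Daniel mentions; extra memory-controller taps cut it further by shortening the distance to the most contended node.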
Hi folks,

I'm happy to announce that a binary demo package showing off embedded Linux on the Xilinx Spartan3E-500 Starter Kit (Revision C) is now available from PetaLogix. Go to

www.petalogix.com

and follow the links to Resources->Reference Designs to get the package.

Supported board features include TCP/IP networking (via the Xilinx EthernetLite core), buttons, switches, LEDs and rotary encoder, flash, and an embedded webserver demo including web-based kernel image update.

Please contact info@petalogix.com with any queries or for more info.

Regards,

John

Article: 108485
eziggurat@gmail.com wrote:
> Hi,
>
> I was wondering what is the fastest and most efficient interpolation
> algorithm that is easily adaptable to a Spartan 3 and able to process
> more than 60 frames per second?
>
> I have read a paper about an adaptive Newton interpolation algorithm
> that is supposed to use fewer resources than bicubic, with image
> quality comparable to bicubic.
>
> The algorithm will be used to scale the output of an LCD driver IC that
> outputs 18 to 24 bit RGB; the FPGA will scale the image down to less
> than 18 bits for use on a smaller LCD screen.
>
> Any tips will be very helpful.
>
> Regards
> P

If I had to do it I'd implement the method used by pnmscale. The results are great and would be very easy to implement with integer arithmetic. Source is in the netpbm package; google for it. The method scales down or up and is accurate. Note that his implementation used floats the last time I checked, but there is absolutely no reason to do this; integers work fine if you just do some Bresenham-type tweaks. Even if you don't end up using this system, it's a worthwhile exercise working through the theory.

Here's some comments from pnmscale.c:

  /* Here's how we think of the color mixing scaling operation:

     First, I'll describe scaling in one dimension. Assume we have a
     one row image. A raster row is ordinarily a sequence of discrete
     pixels which have no width and no distance between them -- only a
     sequence. Instead, think of the raster row as a bunch of pixels
     1 unit wide adjacent to each other. For example, we are going to
     scale a 100 pixel row to a 150 pixel row. Imagine placing the
     input row right above the output row and stretching it so it is
     the same size as the output row. It still contains 100 pixels,
     but they are 1.5 units wide each.

     Our goal is to make the output row look as much as possible like
     the input row, while observing that a pixel can be only one color.

     Output Pixel 0 is completely covered by Input Pixel 0, so we make
     Output Pixel 0 the same color as Input Pixel 0. Output Pixel 1 is
     covered half by Input Pixel 0 and half by Input Pixel 1. So we
     make Output Pixel 1 a 50/50 mix of Input Pixels 0 and 1. If you
     stand back far enough, input and output will look the same.

     This works for all scale factors, both scaling up and scaling
     down. This program always stretches or squeezes the input row to
     be the same length as the output row; the output row's pixels are
     always 1 unit wide.

     The same thing works in the vertical direction. We think of rows
     as stacked strips of 1 unit height. We conceptually stretch the
     image vertically first (same process as above, but in place of
     single-color pixels, we have a vector of colors). Then we take
     each row this vertical stretching generates and stretch it
     horizontally.
  */

-Dave

--
David Ashley    http://www.xdr.com/dash
Embedded linux, device drivers, system architecture

Article: 108486
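The pnmscale mixing scheme quoted above boils down to coverage-weighted averaging, and it really does work with integers only: stretch both rows onto a common grid of len(row)*out_len units, so every overlap length is an exact integer weight. A small Python sketch of the idea (my own illustration, not the actual netpbm source):

```python
def scale_row(row, out_len):
    """Coverage-weighted 1-D rescale using only integer arithmetic.

    On a grid of len(row)*out_len units, input pixel i spans
    [i*out_len, (i+1)*out_len) and output pixel k spans
    [k*in_len, (k+1)*in_len), so each overlap is an exact integer
    weight and every output pixel's weights sum to in_len.
    """
    in_len = len(row)
    out = []
    for k in range(out_len):
        lo, hi = k * in_len, (k + 1) * in_len
        acc = 0
        for i, value in enumerate(row):
            a, b = i * out_len, (i + 1) * out_len
            overlap = min(hi, b) - max(lo, a)
            if overlap > 0:
                acc += overlap * value
        out.append(acc // in_len)
    return out

# The 100 -> 150 example from the comments, in miniature (2 -> 3 pixels):
print(scale_row([0, 100], 3))          # -> [0, 50, 100]
print(scale_row([10, 20, 30, 40], 2))  # -> [15, 35]
```

The middle output pixel of the first call is the 50/50 mix described in the comments, and the second call shows the same formula scaling down. In an FPGA the inner loop collapses to an accumulate-and-step structure, since each output pixel overlaps at most two input pixels when scaling up.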
Hi Jack,

This post may be useful:
http://groups.google.com/group/comp.arch.fpga/browse_frm/thread/27cd7423127f3bbb/1fbb4ad5554ab9ca?lnk=st&q=device+and+pin+subroto&rnum=2#1fbb4ad5554ab9ca

It deals with setting up Unused Pins for your projects to the correct value.

Hope this helps,
Subroto Datta
Altera Corp.

"Jack Zkcmbcyk" <zkcmbcyk9@hotmail.com> wrote in message news:s5qNg.40493$8g4.523646@weber.videotron.net...
> Hello out there!
>
> I purchased an Altera Nios II Cyclone based (EP1C12F324) evaluation kit
> a while ago and didn't get to work with it until recently. At first I
> was happy to see how quickly I got the board up and running, interacting
> with my PC through an Ethernet LAN and serving web pages using the
> uClinux on-board http server.
>
> Having had enough of the demo, I started developing an application of
> my own. I first downloaded (via the USB cable connector that provides a
> USB Blaster interface) a very simple code file to start flashing the
> LEDs. When that didn't work, I simplified the code to an even simpler
> test case where only one LED was involved. After that not working
> either, I started combing through the documentation in search of some
> clues. This is when I noticed a lot of little discrepancies between the
> real board and the document set.
>
> To this day, I still haven't been able to get a simple LED to flash
> (not even once). I am getting a bit impatient with the whole thing and
> might just go back to a Xilinx board that is sitting around my office.
> Before I do that, I would like to know if anyone out there has played
> with this board and might have some hints for me.
>
> Thanks in advance for any tidbits.
>
> P.S. When I download the code to the board using Quartus II (6.0sp1),
> the download program seems all happy and returns with a "SUCCESS"
> message at the end of the process. The board on the other end always
> contains the same Nios II implementation running uClinux!!! Could it be
> that after the JTAG load is complete the FPGA resets and simply reloads
> the default application from the on-board configuration device?

Article: 108487
On 2006-09-12, John <bogus@bogus.ema> wrote:
> Hi Andreas,
>
> In article <ee4jap$6du$1@news.lysator.liu.se>, ehliar@lysator
>> Anyway, what I can confirm is that we have successfully synthesized
>> an older version of OR1200 with both ISE 7.1 and 8.1. The exact
>> version can be found on the course homepage at
>
> How much of the resources are used by the OR1200? Can you fit the
> OR1200, an SDRAM controller and the Ethernet in there?

Into what? We are using an XC2V4000 in this course, so we can fit quite a lot. But the resource breakdown looks roughly like this after the design has been synthesized:

                          LUT    FF
  * cpu                  5047  1345
  * dma                   654   254
  * vga                   816   755
  * eth                  2965  2337
  * dct accelerator      1680   679
  * camera                518   527
  * sram controller       129    25
  * sdram controller      187    70
  * blockram controller   111     3
  * UART                  823   346
  * wishbone bus          618    10

The resource utilization was extracted from the XDL file after synthesis, map, and place and route. This means that some signals which belong to more than one module end up in only one module after all. Some blockrams are also used, but I haven't added RAMB16 extraction to my script yet.

/Andreas

Article: 108488
John Williams schrieb:
> Hi folks,
>
> I'm happy to announce that a binary demo package showing off embedded
> Linux on the Xilinx Spartan3E-500 Starter Kit (Revision C) is now
> available from PetaLogix. Go to
>
> www.petalogix.com
>
> and follow the links to Resources->Reference Designs to get the package.
>
> Supported board features include TCP/IP networking (via the Xilinx
> EthernetLite core), buttons, switches, LEDs and rotary encoder, flash,
> and an embedded webserver demo including web-based kernel image update.
>
> Please contact info@petalogix.com with any queries or for more info.
>
> Regards,
>
> John

Dear John,

a binary demo for Linux isn't very interesting or useful - everybody is waiting for PetaLogix to finally release PetaLinux, but so far no release-date information has been announced by PetaLogix. Can we assume that the actual PetaLinux release date is coming closer, or is PetaLogix still holding back information about a possible release date?

I see the binary demo is still based on the EDK 8.1 tools, so it cannot fully support MicroBlaze version 5. I wonder why PetaLogix hasn't used the EDK 8.2 tools? To my knowledge PetaLogix had early access to EDK 8.2 (and GNU code?) and thus would have been in the position to use the latest GCC toolchain.

As for the '500E binary demo', I can only confirm that it boots successfully in XSIM ver 1.1.a, free download here:
http://www.xilant.com/index.php?option=com_remository&Itemid=36&func=select&id=3

Antti

Article: 108489
lb.edc@telenet.be schrieb:
> Hi Antti,
>
> I understood from our local FAE that it should become available very
> soon, as Lattice is building inventory. The kit should be quite
> complete - no indication of price however. The FAE told me that some
> new things are being tested for compliancy (like SATA) and one can
> expect full support for this as well in the near future.
>
> Luc
>
> On 11 Sep 2006 07:20:10 -0700, "Antti" <Antti.Lukats@xilant.com> wrote:
>
>> Hi
>>
>> http://www.latticesemi.com/products/developmenthardware/fpgafspcboards/scpciexpressx1evaluationb.cfm
>>
>> I wonder if that board is available and if it really supports SATA as
>> it is advertised to support?
>>
>> On the website there is no price information, which is usually bad
>> news regarding actual board availability :(
>>
>> Antti

Thanks, the board looks nice, and if it is also priced down to earth it could be the first Lattice board for me to purchase for my collection (I have only used Lattice chips on self-designed boards so far).

Antti

Article: 108490
Hi group,

I'm having problems with ViewDraw schematics for VHDL. I'm building a system with lots of modules, so I'm using the schematic tool to make it simpler. When creating symbols with busses I keep getting errors. What is the correct way of making a symbol from a schematic with busses - should I use in/out pins, or something else? Furthermore, after I have created a symbol, one of the pins has a "U?" beside it. How do I get around this problem?

Kind regards,
Brian

Article: 108491
Hi Ray & Kolja,

Thanks for your reply. The advice is very helpful, but the problem is that the code is for an ASIC design and is frozen. I just migrate the code to the FPGA to check its function. So do you have any suggestion on how to set up constraints - something like what DC does in ASIC design? Thanks.

Best regards,
skyworld

Kolja Sulimma wrote:
> skyworld schrieb:
>> Hi,
>> I am porting my design from ASIC to FPGA, but I found the timing in
>> the FPGA is very bad. What I know is to set up the clock constraints;
>> I don't know how to add other constraints. Can anybody give me some
>> hints? Thanks very much.
>> I use an Altera Cyclone II.
>
> Learn the FPGA architecture and restructure your code to make it easier
> to map to it. For example, most architectures have a LUT - carry logic
> - DFF chain.
>
> If you write something like
>
>   if add then
>     result = a + b;
>   else
>     result = a - b;
>   end if;
>
> you must be extremely lucky with the tools to get less than three LUTs
> per bit, with a critical path of two LUTs and a carry chain.
>
> If you write:
>
>   if add then
>     temp = b;
>   else
>     temp = -b;
>   end if;
>   result = a + temp;
>
> you need only one LUT per bit and the critical path is one LUT plus a
> carry chain.
>
> This is because the MUX is in front of the carry logic now and can be
> implemented in the same LUT as the non-carry logic of the adder.
>
> There are more areas like these where you can make your design easier
> for the tools to optimize.
>
> Kolja Sulimma

Article: 108492
<kits59@gmail.com> wrote in message news:1158004737.330801.197250@i42g2000cwa.googlegroups.com...
> Hello,
>
> Recently, I have been trying to simulate my system to verify that the
> pieces are working correctly in my EDK project. In order to do this, I
> need to use the SmartModel simulation tools for ModelSim 6.1e. I've
> read through all of Xilinx's documentation and have set up the
> modelsim.ini file correctly, but when I run the simulation, everything
> hangs. There are no calls to the BRAM to fetch code for execution.
>
> The only thing I get that might be a part of the problem is the
> following warning:
>
> # ** Warning (SmartModel):
> # Model is being requested to run at a finer resolution than
> # necessary.

I have no experience with EDK, but that warning is OK; if I run my design with anything smaller than ps I get this warning. Given that your system hangs, I suspect you are loading the wrong dll/sl/so or your libraries are out of sync with the model. Check that your veriuser, libsm and libswift variables are all set up correctly.

Hans
www.ht-lab.com

> Anyone been able to do a system level simulation using SimGen and EDK?
>
> Jon

Article: 108493
Hello,

I have created a fairly simple MicroBlaze-based system: just one core, opb-uart, BRAM, opb-timer and opb-intc. I wrote a small program using the Xilkernel OS which creates two threads printing a thread-specific number and the current clock ticks. The code is given below. I expect the program to print a message from one thread each second, but that doesn't work.

When executing the program, both threads seem to enter the code fragment locked by the mutex nearly at the same time, as can be seen from the printed clock ticks. The mutex is declared static and initialized in the static thread created by xilkernel_main, before the "Hello world" threads are created.

<--------------------------->
void thread_func(int number)
{
    while (1)
    {
        pthread_mutex_lock(&print_mutex);
        sleep(500);
        xil_printf("Ticks = %d. Thread %d says: Hello world!\r\n",
                   xget_clock_ticks(), number);
        sleep(500);
        pthread_mutex_unlock(&print_mutex);
    }
}
<--------------------------->

Output via UART:
Ticks = 362. Thread 1 says: Hello world!
Ticks = 365. Thread 2 says: Hello world!
Ticks = 464. Thread 1 says: Hello world!
Ticks = 468. Thread 2 says: Hello world!

Regards,
Andreas

Article: 108494
Andreas Hofmann schrieb:
> Hello,
>
> I have created a fairly simple MicroBlaze-based system: just one core,
> opb-uart, BRAM, opb-timer and opb-intc. I wrote a small program using
> the Xilkernel OS which creates two threads printing a thread-specific
> number and the current clock ticks. The code is given below. I expect
> the program to print a message from one thread each second, but that
> doesn't work.
>
> When executing the program, both threads seem to enter the code
> fragment locked by the mutex nearly at the same time, as can be seen
> from the printed clock ticks. The mutex is declared static and
> initialized in the static thread created by xilkernel_main, before the
> "Hello world" threads are created.
>
> <--------------------------->
> void thread_func(int number)
> {
>     while (1)
>     {
>         pthread_mutex_lock(&print_mutex);
>         sleep(500);
>         xil_printf("Ticks = %d. Thread %d says: Hello world!\r\n",
>                    xget_clock_ticks(), number);
>         sleep(500);
>         pthread_mutex_unlock(&print_mutex);
>     }
> }
> <--------------------------->
>
> Output via UART:
> Ticks = 362. Thread 1 says: Hello world!
> Ticks = 365. Thread 2 says: Hello world!
> Ticks = 464. Thread 1 says: Hello world!
> Ticks = 468. Thread 2 says: Hello world!
>
> Regards,
>
> Andreas

Hm, I think each thread should print one message once a second, and that is what you see as well, so everything is working?

Antti

Article: 108495
Yes, but if the program doesn't fit in the BRAM defined, the compiler exits with an error. This is not the case.

"Göran Bilski" <goran.bilski@xilinx.com> a écrit dans le message de news: 45050619.6010208@xilinx.com...
> sjulhes wrote:
>>> - How do you detect that it doesn't start?
>>
>> Test program is a hello world print onto the UART; sometimes there is
>> nothing on the hyperterminal.
>>
>>> - Are you using external memory?
>>
>> NO
>>
>>> - Can the program fit totally in the internal BRAM?
>>
>> YES, hello world is quite light when it's alone!!!
>
> Not if you use printf, that takes around 40kb of code.
> Just check the size of the program with the command "mb-size" and be
> sure that it fits within the defined LMB area.
>
>>> - What download cable are you using?
>>
>> JTAG DLC7
>>
>> "Göran Bilski" <goran.bilski@xilinx.com> a écrit dans le message de
>> news: edrjlu$9501@cliff.xsj.xilinx.com...
>>
>>> sjulhes wrote:
>>>
>>>> Hello all,
>>>>
>>>> We are trying to design a small MicroBlaze design in a Spartan 3,
>>>> and the problem we have is that the MicroBlaze does not always start
>>>> when the bitstream is downloaded with JTAG.
>>>>
>>>> But when we implement the debug module it always works.
>>>>
>>>> Does anyone have a clue?
>>>> Is it a timing problem? Are there specific timing constraints to add
>>>> for MicroBlaze?
>>>> A software problem (the linker script is automatically generated by
>>>> EDK)?
>>>>
>>>> Any idea is welcome.
>>>>
>>>> Thank you.
>>>>
>>>> Stéphane.
>>>
>>> Hi,
>>>
>>> It's too little information to point out what is wrong.
>>>
>>> - How do you detect that it doesn't start?
>>> - Are you using external memory?
>>> - Can the program fit totally in the internal BRAM?
>>> - What download cable are you using?
>>>
>>> Göran Bilski

Article: 108496
sjulhes schrieb:
> Hello all,
>
> We are trying to design a small MicroBlaze design in a Spartan 3, and
> the problem we have is that the MicroBlaze does not always start when
> the bitstream is downloaded with JTAG.
>
> But when we implement the debug module it always works.
>
> Does anyone have a clue?
> Is it a timing problem? Are there specific timing constraints to add
> for MicroBlaze?
> A software problem (the linker script is automatically generated by
> EDK)?
>
> Any idea is welcome.
>
> Thank you.
>
> Stéphane.

When it doesn't start, use iMPACT and read back the status word; look if GHIGH=1 (and that all other relevant bits are set properly).

Antti

Article: 108497
Antti schrieb:
> Andreas Hofmann schrieb:
>
>> Hello,
>>
>> I have created a fairly simple MicroBlaze-based system: just one core,
>> opb-uart, BRAM, opb-timer and opb-intc. I wrote a small program using
>> the Xilkernel OS which creates two threads printing a thread-specific
>> number and the current clock ticks. The code is given below. I expect
>> the program to print a message from one thread each second, but that
>> doesn't work.
>>
>> When executing the program, both threads seem to enter the code
>> fragment locked by the mutex nearly at the same time, as can be seen
>> from the printed clock ticks. The mutex is declared static and
>> initialized in the static thread created by xilkernel_main, before the
>> "Hello world" threads are created.
>>
>> <--------------------------->
>> void thread_func(int number)
>> {
>>     while (1)
>>     {
>>         pthread_mutex_lock(&print_mutex);
>>         sleep(500);
>>         xil_printf("Ticks = %d. Thread %d says: Hello world!\r\n",
>>                    xget_clock_ticks(), number);
>>         sleep(500);
>>         pthread_mutex_unlock(&print_mutex);
>>     }
>> }
>> <--------------------------->
>>
>> Output via UART:
>> Ticks = 362. Thread 1 says: Hello world!
>> Ticks = 365. Thread 2 says: Hello world!
>> Ticks = 464. Thread 1 says: Hello world!
>> Ticks = 468. Thread 2 says: Hello world!
>>
>> Regards,
>>
>> Andreas
>
> Hm, I think each thread should print one message once a second, and
> that is what you see as well, so everything is working?

No. As the sleeping is done while the mutex is locked, each thread should print its message every two seconds. As one tick is approximately 10 ms, the message of thread 2 is printed 30 ms after the message of thread 1. So there must be something wrong.

Regards,
Andreas

Article: 108498
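Andreas's expectation above is right for a working mutex: since the lock is held across both sleeps, two prints can never be closer together than one full sleep period, so 30 ms gaps mean the lock (or the sleep) is not behaving as intended. The intended scheduling is easy to demonstrate with POSIX-style threads in Python (an illustration only, not Xilkernel code; the 50 ms pause is an arbitrary stand-in for sleep(500)):

```python
import threading
import time

lock = threading.Lock()
events = []  # (thread_number, timestamp) pairs, in print order

def thread_func(number, iterations=2, pause=0.05):
    for _ in range(iterations):
        with lock:            # lock held across BOTH sleeps, as in the post
            time.sleep(pause)
            events.append((number, time.monotonic()))
            time.sleep(pause)

threads = [threading.Thread(target=thread_func, args=(n,)) for n in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With a working mutex, consecutive prints are separated by at least two
# pause periods (sleep-after-print + sleep-before-print) -- nothing like
# the 30 ms gaps in the reported UART output.
gaps = [later - earlier for (_, earlier), (_, later) in zip(events, events[1:])]
assert len(events) == 4
assert all(gap >= 0.05 for gap in gaps)
```

If the equivalent Xilkernel run shows sub-period gaps, the first things to suspect are the mutex initialization (attributes, or initializing it before the kernel is up) and whether sleep() actually suspends the calling thread at that kernel configuration.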
"jetq88" <jetq5188@gmail.com> wrote in message news:1157987349.644693.118260@e3g2000cwe.googlegroups.com...
> hello all,
>
> I have experience with ARM microcontrollers in C/C++ programming; now
> my job function forces me to expand my skills into FPGAs. I don't know
> if anyone out there in the same shoes has experience with this
> transition. Which hardware language - VHDL, Verilog HDL, SystemC or
> Handel-C - will make my life easier? I guess if you master one, it
> should make you quicker to jump into another one, but for me with
> limited hardware experience, which one can make this transition
> smoother?
>
> thanks
>
> jet

In addition to all the good advice you have received so far, I would add: do not ignore SystemC. It is indeed correct that if your company is doing low-complexity FPGA work then SystemC might be a slight overkill :-). However, if your company is doing complex SoC stuff then SystemC (and SystemVerilog) knowledge might be a very useful asset. Not sure where you are based, but at least in the UK verification engineers generally get paid better than FPGA RTL designers.

Hans
www.ht-lab.com

Article: 108499
<james7uw@yahoo.ca> wrote in message news:1158029715.412496.198790@d34g2000cwd.googlegroups.com...
> What could be causing everything to work fine at the first three stages
> and then have the post-map stage fail its simulation so badly?

A number of things... things I mentioned in earlier posts; see below for a quick summary.

> I certainly suspect all the removed "redundant logic" that the mapper
> is reporting.

But you have no basis for that suspicion. It might be the case that the synthesis process has a bug, but you need to prove it... and then open a service request with the company that has the bug. My point is: don't let your objectivity be clouded by what you suspect; debug and prove.

> But how to indicate it is not really redundant, without using "keep"
> and "save" statements everywhere?

And this is where you start spinning your wheels (in my opinion). Instead of simply debugging the post-map sim to the source of the discrepancy, you're trying things based on a suspicion that is not proven. Let's say for the sake of argument that your suspicion about the removal of redundant logic is wrong and that the problem is a timing issue with your testbench instead. That would mean that every minute you spent chasing 'keep' and 'save' attributes etc. was wasted time.

> I can't even complain that my signals are unconnected (they are all
> connected), because that's not the problem: the mapper is not removing
> "unused logic".

This will sound like a dumb question on my part, but what is the distinction in your mind between 'redundant logic' and 'unused logic'? The reason for my confusion at this point is that you say the 'redundant' stuff is getting removed and yet there is no 'unused' logic getting removed.

If by 'redundant' you mean the classical Boolean Logic 101 definition, where you add redundant logic to act as 'cover' terms in your Karnaugh map to avoid race conditions, then that is the most likely cause of your problems. Is this the type of logic that you are trying to 'keep' but is being mapped away as an 'optimization'? If it is, then the rest of this post probably doesn't apply and we can discuss this point further; but if it is not, then keep on reading.

One other source of 'optimization' is that an output of some entity is not really used. The logic for equation 'x' happens to reduce down to always being 'false'. This means that everything downstream of 'x' that depends on 'x' being true can never happen, so it can be optimized away. It's not the fault of the optimizer removing redundant logic; that's the way the original is coded. You probably already realize this, but I thought a quick 'Optimization 101' wouldn't hurt... but I also don't think focusing on what is being optimized away is the way you need to go on this one (which is the reason in my first post I asked you "Why...").

What you need to do is to simulate the post-map VHDL file and trace back why output signal 'x' at time t is set to '0' when with your original code it is '1'. Use the sim results from your original code as your guide for what 'should' happen and the post-map VHDL simulation for what actually happens, and debug the problem. It could be that:
- There is some bug in the translation tool
- It could be some setting in your build process
- It could be timing related (i.e. your testbench is violating the setup/hold time requirements of the post-map model)
- Probably other things too

In any case, treat the full post-map model as something to debug, find the reason for the discrepancy, and go from there.

KJ
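The 'Boolean Logic 101' point above can be made concrete: a consensus ('cover') term added to suppress a hazard is, by construction, logically redundant, which is exactly why a mapper that reasons only about Boolean equivalence feels entitled to strip it. A quick exhaustive check in Python (purely illustrative, not anyone's actual netlist):

```python
from itertools import product

# f = a*b + (not a)*c, with the consensus term b*c added as hazard cover.
# The cover term never changes the truth table -- it exists only to keep
# the output glitch-free while 'a' switches -- so a purely Boolean
# optimizer will remove it as 'redundant logic'.
def f_plain(a, b, c):
    return (a and b) or ((not a) and c)

def f_with_cover(a, b, c):
    return (a and b) or ((not a) and c) or (b and c)

same = all(f_plain(a, b, c) == f_with_cover(a, b, c)
           for a, b, c in product((False, True), repeat=3))
print(same)  # -> True
```

So if a design relies on cover terms for glitch-free combinational outputs, the fix is not to fight the mapper with 'keep' attributes everywhere but to register the outputs, since synchronous logic is immune to such hazards between clock edges.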