I need help solving a timing problem. I have two clocks, a fast one and a slower one, fclk and sclk. I use two flip-flops to synchronize the slower clock sclk to fclk; however, sclk is also used to clock some other items within the module. Is there a way to tell the tool to limit the path from fclk FF -> synchronized sclk FF -> fclk FF to only one clock cycle of fclk?Article: 118401
On Apr 24, 3:25 pm, Matthew Hicks <mdhic...@uiuc.edu> wrote: > I'm working on a RTOS for the PowerPC chip on the Virtex-II XUP board. I > need an interrupt to trigger so I run my scheduler at regular intervals, > so I setup the PIT to trigger an interrupt at 1s (purely for testing reasons, > I will be going down to around 5ms). The program seems to pause when I finally > enable the interrupt and I see no signs that the handler has been run. I > looked at several references and spent a day shifting around my code, to > no avail. If someone could take a peek at my code below and offer assistance > that would be great. Thanks. > Removed code... > Xuint32 status; You may want to use volatile Xuint32 status; since you are changing the variable behind the back of the main program. The compiler doesn't know about what you do in interrupt routines so it may put this variable in a register and only read the memory once. Alan Nishioka alan@nishioka.comArticle: 118402
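For illustration, here is the posted skeleton pared down to the volatile-flag pattern Alan describes, using only the calls and headers that appear in the code above (a sketch, not the poster's actual fixed program). The volatile qualifier is the one change, and it is what forces the polling loop to re-read memory on every pass instead of spinning on a stale register copy.

    #include "xbasic_types.h"
    #include "xexception_l.h"
    #include "xtime_l.h"

    /* volatile: the handler modifies this behind the compiler's back, so every
       read in the polling loop must go out to memory, not to a cached register. */
    volatile Xuint32 status = 0;

    void pit_InterruptHandler(void *dataPtr)
    {
        status = 55;                 /* tell main() that the PIT fired */
        XTime_PITClearInterrupt();   /* acknowledge the interrupt      */
    }

    int main(void)
    {
        XExc_Init();
        XExc_RegisterHandler(XEXC_ID_PIT_INT,
                             (XExceptionHandler)pit_InterruptHandler,
                             (void *)0);
        XTime_PITSetInterval(100000);
        XTime_PITEnableAutoReload();
        XTime_PITClearInterrupt();
        XTime_PITEnableInterrupt();
        XExc_mEnableExceptions(XEXC_NON_CRITICAL);

        while (status != 55)
            ;                        /* without volatile this loop may never exit */
        return 0;
    }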
Today I tried the FIT timer interrupt and the watchdog interrupt. Both did the same thing. The program would work fine until the interrupt occured, then the program would freeze and do nothing. The best I could do is have the watchdog timer reset the core when it saw that the previous watchdog interrupt wasn't handled, which worked great. Again, I would really appreciate if someone could help push me past this sticking point. I have attached the new code for reference: volatile Xuint32 status; void wdtHandler(void *dataPtr) { status = 55; XTime_WDTClearInterrupt(); } int main(void) { Xuint32 DataRead; print("-- Entering main() --\r\n"); XExc_Init(); XExc_RegisterHandler(XEXC_ID_WATCHDOG_TIMER_INT, (XExceptionHandler)wdtHandler, (void*) 0); XTime_WDTSetPeriod(XREG_TCR_WDT_PERIOD_11 | XREG_TCR_WDT_RESET_CONTROL_01); XExc_mEnableExceptions(XEXC_ALL); status = 0; XTime_WDTClearInterrupt(); XTime_WDTEnableInterrupt(); while(status != 55) { DataRead = mfspr(0x10C); XExc_mDisableExceptions(XEXC_ALL); xil_printf("Timebase: %d\r\n", DataRead); XExc_mEnableExceptions(XEXC_ALL); } print("-- Exiting main() --\r\n"); return 0; } ---Matthew Hicks > I'm working on a RTOS for the PowerPC chip on the Virtex-II XUP board. > I need an interrupt to trigger so I run my scheduler at regular > intervals, so I setup the PIT to trigger an interrupt at 1s (purely > for testing reasons, I will be going down to around 5ms). The program > seems to pause when I finally enable the interrupt and I see no signs > that the handler has been run. I looked at several references and > spent a day shifting around my code, to no avail. If someone could > take a peek at my code below and offer assistance that would be great. > Thanks. > > #include "xparameters.h" > #include "stdio.h" > #include "xbasic_types.h" > #include "gpio_header.h" > #include "xexception_l.h" > #include "xtime_l.h" > Xuint32 status; > > void pit_InterruptHandler(void *dataPtr) > { > status = 55; > XTime_PITClearInterrupt(); > } > int main(void) > { > Xuint32 DataRead; > XExc_Init(); > XExc_RegisterHandler(XEXC_ID_PIT_INT, > (XExceptionHandler)pit_InterruptHandler, > (void*) 0); > XTime_PITSetInterval(100000); > XTime_PITEnableAutoReload(); > XExc_mEnableExceptions(XEXC_NON_CRITICAL); > print("-- Entering main() --\r\n"); > print("\r\nRunning GpioOutputExample() for LEDs_4Bit...\r\n"); > status = GpioOutputExample(XPAR_LEDS_4BIT_DEVICE_ID,4); > if(status == 0) > { > print("GpioOutputExample PASSED.\r\n"); > } > else > { > print("GpioOutputExample FAILED.\r\n"); > } > XTime_PITClearInterrupt(); > XTime_PITEnableInterrupt(); > print("Here\r\n"); > status = 0; > while(status != 55) > { > ; > } > print("-- Exiting main() --\r\n"); > > return 0; > } > ---Matthew Hicks >Article: 118403
Hi, I am using a Xilinx Ethernet MAC core on the OPB bus. I would like to know how to drop an Ethernet packet. I read the Length FIFO in the MAC to get the length of the packet that has been received. On the basis of the length and the time at which it was received, I decide whether the bus bandwidth is sufficient for me to transfer the packet out of the Receive FIFO into memory. If the bandwidth is not sufficient I have to drop the packet. The problem is that to drop the packet I have to empty the MAC receive FIFO. To clear the FIFO I have to read the FIFO (as many times as the length of the received data). But this approach uses up bus bandwidth, the very reason why I decided to drop the packet. The other option that I can think of is to write into the Read Packet FIFO Reset Register. But this has the drawback that if I receive lots of packets in a burst, then I will drop all of them. Any advice would be greatly appreciated. Thank you, VenuArticle: 118404
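A rough C sketch of the decision being described. The register names here (RX_LEN_FIFO, RX_DATA_FIFO, RX_FIFO_RESET) and the base address are placeholders standing in for whatever the OPB EMAC data sheet and xparameters.h actually define, and bus_words_available() is a hypothetical helper for the bandwidth estimate; the point is only to make the trade-off between read-and-discard and a FIFO reset concrete.

    #include <stdint.h>

    /* Placeholder register accessors -- substitute the real base address
       and offsets from xparameters.h and the core's register map. */
    #define EMAC_BASE      0x80000000u
    #define RX_LEN_FIFO    (*(volatile uint32_t *)(EMAC_BASE + 0x00))
    #define RX_DATA_FIFO   (*(volatile uint32_t *)(EMAC_BASE + 0x04))
    #define RX_FIFO_RESET  (*(volatile uint32_t *)(EMAC_BASE + 0x08))

    /* Hypothetical helper: how many bus words can we afford to move right now? */
    extern uint32_t bus_words_available(void);

    void handle_received_packet(uint32_t *dst, int more_packets_queued)
    {
        uint32_t len_bytes = RX_LEN_FIFO;      /* length of the queued packet */
        uint32_t len_words = (len_bytes + 3) / 4;
        uint32_t i;

        if (len_words <= bus_words_available()) {
            /* Enough bandwidth: copy the packet out of the receive FIFO. */
            for (i = 0; i < len_words; i++)
                dst[i] = RX_DATA_FIFO;
        } else if (more_packets_queued) {
            /* Read-and-discard: costs the very bus cycles we tried to save,
               but loses only this one packet. */
            for (i = 0; i < len_words; i++)
                (void)RX_DATA_FIFO;
        } else {
            /* FIFO reset: no extra bus traffic, but drops every packet
               currently queued in the FIFO, not just this one. */
            RX_FIFO_RESET = 1;
        }
    }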
Well, I found the solution. When I regenerate the linker script the program works 100% correctly. Forgive my newness to things like this (I come from the high-level land of windows application programming not embedded programming), but is this something that I should have done from the start? ---Matthew Hicks > Today I tried the FIT timer interrupt and the watchdog interrupt. > Both did the same thing. The program would work fine until the > interrupt occured, then the program would freeze and do nothing. The > best I could do is have the watchdog timer reset the core when it saw > that the previous watchdog interrupt wasn't handled, which worked > great. Again, I would really appreciate if someone could help push me > past this sticking point. I have attached the new code for reference: > > volatile Xuint32 status; > > void wdtHandler(void *dataPtr) > { > status = 55; > XTime_WDTClearInterrupt(); > } > int main(void) > { > Xuint32 DataRead; > print("-- Entering main() --\r\n"); > > XExc_Init(); > XExc_RegisterHandler(XEXC_ID_WATCHDOG_TIMER_INT, > (XExceptionHandler)wdtHandler, > (void*) 0); > XTime_WDTSetPeriod(XREG_TCR_WDT_PERIOD_11 | > XREG_TCR_WDT_RESET_CONTROL_01); > XExc_mEnableExceptions(XEXC_ALL); > status = 0; > XTime_WDTClearInterrupt(); > XTime_WDTEnableInterrupt(); > while(status != 55) > { > DataRead = mfspr(0x10C); > XExc_mDisableExceptions(XEXC_ALL); > xil_printf("Timebase: %d\r\n", DataRead); > XExc_mEnableExceptions(XEXC_ALL); > } > print("-- Exiting main() --\r\n"); > return 0; > } > ---Matthew Hicks > >> I'm working on a RTOS for the PowerPC chip on the Virtex-II XUP >> board. I need an interrupt to trigger so I run my scheduler at >> regular intervals, so I setup the PIT to trigger an interrupt at 1s >> (purely for testing reasons, I will be going down to around 5ms). >> The program seems to pause when I finally enable the interrupt and I >> see no signs that the handler has been run. I looked at several >> references and spent a day shifting around my code, to no avail. If >> someone could take a peek at my code below and offer assistance that >> would be great. Thanks. >> >> #include "xparameters.h" >> #include "stdio.h" >> #include "xbasic_types.h" >> #include "gpio_header.h" >> #include "xexception_l.h" >> #include "xtime_l.h" >> Xuint32 status; >> void pit_InterruptHandler(void *dataPtr) >> { >> status = 55; >> XTime_PITClearInterrupt(); >> } >> int main(void) >> { >> Xuint32 DataRead; >> XExc_Init(); >> XExc_RegisterHandler(XEXC_ID_PIT_INT, >> (XExceptionHandler)pit_InterruptHandler, >> (void*) 0); >> XTime_PITSetInterval(100000); >> XTime_PITEnableAutoReload(); >> XExc_mEnableExceptions(XEXC_NON_CRITICAL); >> print("-- Entering main() --\r\n"); >> print("\r\nRunning GpioOutputExample() for LEDs_4Bit...\r\n"); >> status = GpioOutputExample(XPAR_LEDS_4BIT_DEVICE_ID,4); >> if(status == 0) >> { >> print("GpioOutputExample PASSED.\r\n"); >> } >> else >> { >> print("GpioOutputExample FAILED.\r\n"); >> } >> XTime_PITClearInterrupt(); >> XTime_PITEnableInterrupt(); >> print("Here\r\n"); >> status = 0; >> while(status != 55) >> { >> ; >> } >> print("-- Exiting main() --\r\n"); >> return 0; >> } >> ---Matthew HicksArticle: 118405
On 2007-04-26, Martin Thompson <martin.j.thompson@trw.com> wrote: > "mans" <(myname_here)_123456@yahoo.com> writes: > >> Thanks. >> I am giving Emacs a try. >> How can I setup Xilinx ISE to use emacs? > > Sorry, no idea. I run Emacs as a separate editor. ISE I just use to > run the flow (well, actually most of the time I use some batchfiles, > but on occasion ISE comes in handy...) Edit -> Preferences In the window that opens: ISE General -> Editors Change Editor to Custom and set the command line syntax to something like emacs "$1" where emacs might have to be the full path to your emacs executable. (I think you need to enclose it in quotes if you have installed it in for example C:\Program Files\emacs) If you find emacs useful you might want to check out emacsclient to see if that is something you might find useful. /AndreasArticle: 118406
Hi Martin, I fixed the JTAG problem by: 1. Enabling FUSE Probe (Development Kit software) to open cards on the development kit; 2. In iMPACT, opening a new project and configuring it to detect the device by JTAG; once identification succeeds, checking the "skip config" checkbox in the JTAG co-sim block; 3. Generating the target directory directly on C:\ and not through secondary directories like Documents or Desktop. Actually I was doing hardware-in-the-loop co-simulation, so I see the response both from the software design and from the hardware JTAG co-simulation. "Martin Thompson" <martin.j.thompson@trw.com> wrote in message news:u8xcg4t4d.fsf@trw.com... > "Bryan" <sfoo@xilinx.com> writes: > >> Hi all, thanks to your help, I have managed to solve the JTAG problem on >> my >> XTREME DSP Development Kit. >> > > Great news - what did you do to fix it? > >> However I had a response which attenuates all frequencies even though my >> design is low pass filter.. Anybody can help me with that because i >> checked >> through my design n it seems fine. It is a MAC based FIR 43 tap filter >> though. >> > > One thing immediately springs to mind - does the simulation do the > same thing as the real hardware? > > Cheers, > Martin > > -- > martin.j.thompson@trw.com > TRW Conekt - Consultancy in Engineering, Knowledge and Technology > http://www.conekt.net/electronics.htmlArticle: 118407
On Apr 25, 2:34 pm, "Ben Jones" <ben.jo...@xilinx.com> wrote: > "Pablo" <pbantu...@gmail.com> wrote in message > > news:1177502351.601731.113970@u32g2000prd.googlegroups.com... > > > Hi, I have a project with a big requeriment of memory. So I have > > decided to generate a linker script with every section to SDRAM. The > > problem is that I don't know how can I increase the "default memory > > area" for my app. The reason is that I do "xil_calloc", but when I put > > a big number I receive an error and I think I could do it in a Sdram > > with 32Mb. How can I increase the resources of my Sdram?. What is the > > section in "Linker Script" for doing it possible?. > > Erm, increase the heap size? > > -Ben- I have increased heap_size, but it seems I cannot use more than 60 kbytes of SDRAM.Article: 118408
"mans" <(myname_here)_123456@yahoo.com> writes: > Thanks. > I am giving Emacs a try. > How can I setup Xilinx ISE to use emacs? Sorry, no idea. I run Emacs as a separate editor. ISE I just use to run the flow (well, actually most of the time I use some batchfiles, but on occasion ISE comes in handy...) Cheers, Martin -- martin.j.thompson@trw.com TRW Conekt - Consultancy in Engineering, Knowledge and Technology http://www.conekt.net/electronics.htmlArticle: 118409
John Adair wrote: > > Finally first picture of Darnaw1 our PGA style FPGA board is here here > http://www.enterpoint.co.uk/moelbryn/darnaw1.html. More information on > pricing and spec in the next couple of days will appear on the > website. Those with eagle eyes can work it out the spec from the > picture. > > First shipments will have 16Mbit SPI flash to allow programming of the > FPGA but also to act as a code store for processors like MicroBlaze > implemented within the FPGA. There is also SDRAM on board. Small > numbers of this product will be available to ship next week. > > We would be interested to have feedback on this product and what you > like, and what we could improve on this product and the related series > of products we have planned. Looks very interesting. At first I was scared by the many pins, which make it difficult to put the module on a simple double-sided PCB where you can only route one wire between two pins. But I think the solution is to not use a complete 21x21 socket on the PCB, but to fit single socket pins only for those Darnaw pins that are used in the design. This way you only need holes in the PCB for the actually used pins (and you can even mix normal pins with wire-wrap pins). Do you plan to make a version with an SRAM instead of the SDRAM (something like the 1M x 16 static RAM CY7C1061AV33 from Cypress)? It would be much easier to interface. And if there were a flash memory on the same address and data bus as the SRAM, this would be an ideal platform for experimenting with simple processor designs.Article: 118410
Pablo schrieb: > I have incresed heap_size, but it seems like sdram cannot use more > than 60 kbytes. That's no SDRAM problem. As the Xilinx EDK documentation states: "The xil_* functions operate on a 64 kilobyte buffer, and allocate memory from that. The size of this buffer is fixed and cannot be changed currently. This is a limitation of the xil_* dynamic memory allocation routines." [oslib_rm.pdf/EDK 8.1] I don't know if this has changed in subsequent EDK versions. The PPC405 malloc/calloc does not suffer from this limitation. Best regards, AndreasArticle: 118411
The Altera Quartus II 7.0 software is officially only supported under Red Hat Linux Enterprise 3 and 4 and SUSE Linux Enterprise 9. It also works under the newer openSUSE Linux 10.2, but there are a few quirks: - Quartus hangs sometimes in a futex() system call when starting up. Aborting it with Ctrl-C and restarting it again is the only workaround I know so far. - Quartus depends on the "usbfs" file system being compiled into the kernel in order to operate the USB-Blaster cable. Unfortunately, usbfs is no longer compiled into the openSUSE 10.2 default kernel, which now uses the similar, but incompatible, "udev" driver instead (/dev/bus/usb instead of /proc/bus/usb). Therefore, to use the USB-Blaster cable, you need to recompile your kernel as described on https://bugzilla.novell.com/show_bug.cgi?id=210899 to include "usbfs". This is fairly trivial to do, but obviously requires root access. There are rumours that the next kernel update will reintroduce "usbfs". You also have to mount "usbfs" under /proc/bus/usb. - OpenSUSE 10.2 also no longer uses the "hotplug" system to start custom scripts when some device is plugged in, therefore the patch at http://www.altera.com/support/software/drivers/dri-usb_b-lnx.html is no longer applicable. A workaround is to do the necessary "chmod 666 /proc/bus/usb/.../..." yourself each time after plugging in the cable. (TODO: find out how to configure udevd to do the same) I hope that Altera will - figure out the strange occasional futex() hang on startup - compile using a newer version of libusb that also understands how to use /dev/bus/usb instead of the now deprecaded /proc/bus/usb - update http://www.altera.com/support/software/drivers/dri-usb_b-lnx.html to explain what to do on newer distributions that replaced hotplug with udev (probably simply involves adding some file to /etc/udev/rules.d) Markus -- Markus Kuhn, Computer Laboratory, University of Cambridge http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great BritainArticle: 118412
On 25 Apr., 10:54, Sean Durkin <news_ma...@durkin.de> wrote: > Udo wrote: > > Hello Antti, Peter and ..., > > > yep, some months later now - the same question... > > V5-FX? > > At X-Fest they said "second half of 2007", nothing more specific... > > -- > My email address is only valid until the end of the month. > Try figuring out what the address is going to be after that... yes :( Xilinx only confirmed at X-Fest that delay is 6 months (at least)... nothing more :( AnttiArticle: 118413
Hello Petter Petter Gustad schrieb: > Richard Klingler <richard_uclinux_net> writes: > >> Can for example Quartus or ispLever use the MAC address >> of a Bluetooth or WLAN dongle so when I buy a new PC >> I just need to move the Bluetooth/WLAN dongle? > > Altera and Lattice use flexlm. I have had difficulties in the past > with machines with multiple NIC's to make flexlm select the correct > one. > > I'm using a dedicated license server where the internal NIC is > disabled and a separate PCI NIC. If the server breaks down I set up a > new one and put the NIC into the new machine. PC's are much more > likely to fail than NICs. Of course you need to have floating license > for this type of setup. > > Petter I just tried with generating a license for ispLever Starter 6.1 with the MAC address from my D-LINK DWL-G122 adapter and it works out of the box (o; Also did it for Altera Quartus and it shows 2 NICs and it works with either license generated for each MAC address. So this is a nice way if you want to work on a project on several locations like office and home...so you just take your WLAN dongle with you (o; Not sure about Xilinx/Actel...but I just don't care about them (o; rickArticle: 118414
Have you set, in the .mss file in the xilkernel section, PARAMETER sysintc_spec = <interrupt device>?Article: 118415
I have a system with a Spartan-3E and an SPI flash. I have been using the xspi.exe DOS program to configure the SPI flash. But now I'm wondering whether the new ISE iMPACT 9.1 or later is able to configure the SPI flash through JTAG. I saw in XAPP974 for the Spartan-3A that it is possible at least for that device. But is the same possible for the Spartan-3E?Article: 118416
On Apr 25, 11:37 am, Roman <plyas...@googlemail.com> wrote: > Hello! > > I am using a board with Virtex4 PPC405, external asynchronous SRAM > memory and EDK 8.2i. If application program resides in BRAM and I want > to write and read from SRAM, it is only possible if there is > instruction and data cache enebled and I add XCache_EnableCache in the > beginning of the code. So far it works. Then I tried to run > application from SRAM. So I generated linker script telling that the > program should be in SRAM. After I launched XMD and entered > > dow executable.elf > run > > it didn't work. When I tried to read downloaded code by mrd command > from SRAM, it showed zero values. > Furthermore, I tried to explicitly write a value to SRAM with mwr > command and read it afterwards, it showed zero value again. > > So I thought the problem is that cache is still not enabled (because > code is not running and XCache_EnableCache function was not executed). > Then I set in XMD debug options "Set XMD memory map for PPC405 > features" where it is possible to enable caches, but it didn't help. > > Parameter C_INCLUDE_BURST_CACHELN_SUPPORT in .mhs file is set to 1. > > Does anybody have any suggestions what it could be? > Thanks in advance! > > Best regards > Roman One way to track down the problem is to look at the SRAM lines with ChipScope while stimulating them with mwr and mrd. It could be that you were fooled in the first example, and data reads and writes looked good because they were being accessed via the cache only, and not being posted to the SRAM. I believe ChipScope is a good investment for what you are doing. A free evaluation is available from Xilinx. NewmanArticle: 118417
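When XMD reads of the SRAM come back all zeros, a small write/read-back loop running from BRAM (with the data cache left off) can also help separate "the controller path is broken" from "earlier successes were only hitting the cache". A minimal sketch, assuming a placeholder SRAM_BASE that would really come from the XPAR_* definitions in xparameters.h:

    #include "xbasic_types.h"

    #define SRAM_BASE   0x20000000   /* placeholder: use the XPAR_* base address from xparameters.h */
    #define TEST_WORDS  256

    int sram_walk_test(void)
    {
        volatile Xuint32 *sram = (volatile Xuint32 *) SRAM_BASE;
        Xuint32 i;
        int errors = 0;

        for (i = 0; i < TEST_WORDS; i++)          /* write an address-derived pattern */
            sram[i] = 0xA5A50000 | i;

        for (i = 0; i < TEST_WORDS; i++)          /* read it back; with the D-cache   */
            if (sram[i] != (0xA5A50000 | i))      /* off, every access really goes    */
                errors++;                         /* out over the bus to the SRAM     */

        return errors;                            /* 0 suggests the controller path is fine */
    }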
In news:1177343468.316440.109490@n76g2000hsh.googlegroups.com timestamped 23 Apr 2007 08:51:08 -0700, wallge <wallge@gmail.com> posted: "I don't know about ultraedit, but emacs VHDL mode does a wonderful job colorizing [..]" To be fair, please do not overrate what modes for Emacs accomplish. Many (maybe all) coloring modes for Emacs have a feature I have not noticed in any other text editing program: perceptible delays in coloring characters. In my experience coloring delays in Emacs tend to be less than a second (even for perceptible delays) (except for coloring HTML which takes more than two seconds for even recoloring this newsgroup post I am composing when I delete or add a quotation mark (I am composing with Emacs but not posting with Gnus, and this post is not in HTML but the file format which Lynx invokes Emacs with is deemed by Emacs to be HTML)), but I have not noticed temporarily incorrect coloring in other software. I have definitely noticed these delays in Emacs on a number of (>> 100MHz) machines on a number of operating systems with little processor utilization by other processes over a number of years and definitely for at least VHDL; Ada; TeX; and HTML. (On one occassion for Ada, Emacs never finished correctly coloring one character even though I provided it with many seconds and ordered it to print the file. I do not believe that I accidentally deactivated the mode before it was finished.) "I use it exclusively..." I believe that people who use text editing software tend to be of one of two types of people: people who insist on using the same text editing program pretty much all the time; and people who are happy to use just about any text editing program. I do not exclusively restrict my text editing to one program. " It also has a nice hierarchy browser and lots of other VHDL specific functionality built in." Different things suit different people. Sometimes VHDL mode is good for me when editing VHDL code, sometimes it is annoying (to me) (and it is not its fault): as a habit from Ada I sometimes accidentally type .. instead of downto and VHDL mode automatically replaces .. with =>. TeX mode is so much worse that I often turn it off for a TeX file if I want to use a straight quotation mark (") such as for one of the many Babel configurations which use " as an active character, because TeX mode replaces a straight quotation mark with curved quotation marks (`` or ''). "There are some nice cheat sheets available through a google search that have all the important keyboard shortcuts as well..." As the keyboard shortcuts can be redefined, those cheat sheets must enumerate a lot of shortcuts! :)Article: 118418
On Apr 24, 4:41 pm, Rebecca <pang.dudu.p...@hotmail.com> wrote: > Hello, > I am using the Simulation library compilation wizard in EDK9.1.01i and > was told that "Modlelsim isn't found! please ensure this simulator is > correctly intalled and/or the correspoind enviroment settings are > available" > I do have Modelsim 6.2e installed at my computer and it works. The > only system variable that I set for modlesim is about the license. > Should I set any other variables? > And EDK says it ony support Modelsim Se/Pe 6.1e. I am wondering if it > can support the later versions. But anyway, I installed Modelsim se > 6.1e and got the same result. > Any suggestion? > By the way, I can't get the wizard work successfully for EDK8.2 either > and I have to do it manually. Am I wrong somewhere? > Thanks a lot, > Rebecca in XPS under Project, launch EDK shell In the shell, run vcom it should give you some usage info if it does not, perhaps the system environmental PATH variable was not set to where the modelsim executables reside. NewmanArticle: 118419
Hi, I have posted a couple of threads in this group about SDRAM and heap_size. I think that heap_size cannot be changed with MicroBlaze. I need to do "xil_malloc(16*4096)", but in SDRAM (32MB) I can only do "xil_malloc(15*4096)". Then I increased "heap_size" in the linker script from 0x400 to 0x1400000. I have put 24MB for "heap_size" but it seems heap_size stays the same as 0x400. First of all, I don't understand how I can do "xil_malloc(15*4096)" when I have a heap_size of 0x400 (1024 bytes). Second, I don't know how I can increase my memory resources to malloc more memory. Thanks, and I hope you can help me.Article: 118420
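Given the fixed 64-kilobyte pool behind the xil_* allocators (see the earlier reply quoting oslib_rm.pdf), one way forward is to skip xil_malloc/xil_calloc and use the standard C library allocator, whose heap lives wherever the linker script places it. A rough sketch, assuming the generated linker script maps the heap section to SDRAM and that its heap-size symbol has been enlarged to at least the amount requested:

    #include <stdlib.h>
    #include <string.h>

    #define BUF_BYTES (16 * 4096)       /* the 64 KB that xil_malloc could not deliver */

    int allocate_big_buffer(void)
    {
        /* calloc() draws from the C library heap, whose size and placement
           come from the linker script (heap in SDRAM, size >= BUF_BYTES). */
        unsigned char *buf = calloc(BUF_BYTES, 1);
        if (buf == NULL)
            return -1;                  /* heap too small or not placed in SDRAM */

        memset(buf, 0xAA, BUF_BYTES);   /* touch the whole buffer to prove it is usable */
        free(buf);
        return 0;
    }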
On Apr 26, 1:21 am, Venu <get2v...@gmail.com> wrote: > Hi, > > I am using a Xilinx Ethernet MAC core on the OPB Bus. I would like to > know how to drop an ethernet packet. > > I read the Length FIFO in the MAC to get the length of the packet that > has been recieved. On the basis of the length and the time at which it > has been recieved , now I decide whether the bus bandwidth is > sufficient for me to transfer the packet out of the Recieve FIFO into > a memory. If the bandwidth is not sufficient I have to drop the > packet. > > The problem is that to drop the packet I have to remove the MAC > recieve FIFO. To clear the FIFO I have to read the FIFO ( for as many > times as the length of the recieved data) . But this approach uses up > Bus Bandwidth , the very reason why I decided to drop the packet. > > The other option that I can think of is to write into the Read Packet > FIFO Reset Register. But this just has the drawback that if I recieve > lots of packets in a burst , then I will drop all of them. > > Any advice would be greatly appreciated > > Thank You > Venu Perhaps one could insert a custom block between the OPB bus and the ethernet MAC that could drain the FIFO the desired amount upon command without tying up the OPB bus. -NewmanArticle: 118421
Herbert, it is our intention that you only need to fit the pins you want or need. On the technology level we use on development boards we can squeeze 5 tracks between pins at this pitch, but more realistically, for the likely usage, you would get two, maybe three, between pins on a professionally manufactured, low-tech PCB. So if you only need, say, 60 I/O then you can probably get away with something like 100 pins fitted once you account for power etc. Using, say, the outer rows only would be an easy PCB layout and quite possibly a 2-layer implementation. As a benchmark I always like to point out that our product Raggedstone1 (a low-cost development board) has all 264 I/Os of an XC3S400-4FG456 used, and supports 7-8 power segments, on a 4-layer board, and that's a 1mm ball grid we are dealing with. At the moment we are not looking at SRAM because it would not support our design target of running MicroBlaze on the module very well. Also the Spartan-3E has some SRAM already available internally. If someone comes to us with a project that needs a build of a reasonable number of modules we can probably do a new version for that need. It would take my team less than a day to change the SDRAM for an SRAM in the design, and then we are just down to manufacturing time and cost. Longer term there may be an intermediate module range that sits between the Darnaw and Craignell ranges, but we will wait and see how popular these new releases are. There are bigger Craignells being planned already, so these may be of interest. John Adair Enterpoint Ltd. "Herbert Kleebauer" <klee@unibwm.de> wrote in message news:46305DDB.88463A95@unibwm.de... > John Adair wrote: >> >> Finally first picture of Darnaw1 our PGA style FPGA board is here here >> http://www.enterpoint.co.uk/moelbryn/darnaw1.html. More information on >> pricing and spec in the next couple of days will appear on the >> website. Those with eagle eyes can work it out the spec from the >> picture. >> >> First shipments will have 16Mbit SPI flash to allow programming of the >> FPGA but also to act as a code store for processors like MicroBlaze >> implemented within the FPGA. There is also SDRAM on board. Small >> numbers of this product will be available to ship next week. >> >> We would be interested to have feedback on this product and what you >> like, and what we could improve on this product and the related series >> of products we have planned. > > Looks very interesting. First I was scared about the many pins which > makes it difficult to put it on a simple double sided PCB where you > can only route one wire between two pins. But I think the solution is, > to not use a complete 21x21 socket on the PCB but put single socket pins > only on that DARNAW pins which are used in the design. This way you > only need holes in the PCP for the actual used pins (and you even > can mix normal pins with wire wrap pins). > > Do you plan to make a version with a SRAM instead of the SDRAM > (something like the 1M x 16 Static RAM CY7C1061AV33 from CYPRESS)? > Would be much easier to interface. And if there would be a > Flash Memory on the same address and data bus as the SRAM, > this would be an ideal platform for experimenting with simple > processor designs.Article: 118422
["Followup-To:" header set to comp.arch.fpga.] On 2007-04-26, Terry Brown <tbrown@tyzx.com> wrote: > Is there a reason people don't suggest just using an always block in the > test bench? > > For example, I use: > > //this is a status heartbeat for batch mode operation > integer microseconds; > reg heartbeat; > always #10000 microseconds = microseconds + 10; > always @ (microseconds) > if (heartbeat) > $display("%d us",microseconds); > > This has the advantage on not requiring tcl (and the scripting is an > enhanced, pay for it option), and is portable to other simulators. The major advantage of using the TCL based approach is that you are sure that your simulation status will be printed with the same frequency, (more or less depending on the scheduling of the after command in TCL) regardless of CPU speed and design complexity. On the other hand, your approach is more useful in general since you are able to print much more interesting data than the simulation time such as for example the number of transactions your testbench has performed. /AndreasArticle: 118423
M. Hamed wrote: > I need help solving a timing problem. I have two clocks one a fast one > and a slower one, fclk and sclk. I use two flip flops to synchronize > slower clock sclk to fclk, however sclk is used to clock some other > items within the module. Is there a way that I can tell the tool to > limit the path length from > > fclk FF -> synchronized sclk FF -> fclk FF > > to only one clock cycle of fclk? Yes, there is a way. If you're using Xilinx tools, for instance, you would use a FROM/TO timing constraint, which you can specify either in terms of the fclk period or explicitly in nanoseconds. You should find the FROM/TO syntax and details in the constraints guide. If you're using other tools, please say which ones so you can get specific help.Article: 118424
Hello all, This is a repeat of article 111605, posted to this group on November 6th, see http://www.fpga-faq.com/archives/111600.html#111605 The problem is that BRAMs are synthesized at first and then later thrown away, but only if more than half of the available BRAM resources are used! The problem described does NOT occur in WebPACK 7.1i, but DOES occur in everything after that, including the 9.1i application version J.30. Does anyone have any ideas where to look? Please have a look at the problem description below. Sietse Achterop Computing Science department University of Groningen, The Netherlands

Here is the problem again, in short: The design is the 8051 microcontroller, now version 1.5, with patches from http://www.oregano.at/ip/8051.htm The design works perfectly OK in WebPACK version 7.1i04, so I would assume that something as drastic as this should not happen. The VHDL code also works on Altera FPGAs. I am using a Spartan-3 here, and I'm using the Linux version on Debian on a Pentium IV. The directory with the complete design is available at http://www.cs.rug.nl/~sietse/FPGA/8051-16k (the same, but cleaned up in a tar file, is at: http://www.cs.rug.nl/~sietse/FPGA/8051-256.tar.gz ) The memory used is described in http://www.cs.rug.nl/~sietse/FPGA/geheugen.vhdl All standard stuff I think. In the synthesis report, mc8051_top.syr, under Low Level Synthesis the following lines appear, which are the only suspicious lines:

=========================== from mc8051_top.syr, line 887:
INFO:Xst:2399 - RAMs <inst_Mram_mem1>, <inst_Mram_mem5> are equivalent, second RAM is removed
INFO:Xst:2399 - RAMs <inst_Mram_mem2>, <inst_Mram_mem3> are equivalent, second RAM is removed
INFO:Xst:2399 - RAMs <inst_Mram_mem2>, <inst_Mram_mem4> are equivalent, second RAM is removed
INFO:Xst:2399 - RAMs <inst_Mram_mem2>, <inst_Mram_mem6> are equivalent, second RAM is removed
INFO:Xst:2399 - RAMs <inst_Mram_mem2>, <inst_Mram_mem7> are equivalent, second RAM is removed
============================= end of copy

If the inferred RAMs do not take half of the available BRAMs then these lines do NOT appear and the generated design just works. Several 8051 programs are running perfectly, using data2mem and a bmm file to fill the memory. If the lines appear, the block RAMs are largely REMOVED and the bitfile does not work at all. Here are a few fragments of the report:

=========================================================================
Advanced HDL Synthesis Report
Macro Statistics
# RAMs                               : 3
  16384x8-bit single-port block RAM  : 1
  256x8-bit single-port block RAM    : 1
  8192x8-bit single-port block RAM   : 1
=========================================================================
* Final Report *
=========================================================================
...........
Cell Usage :
..........
# RAMS                               : 4
#   RAMB16_S1                        : 3
#   RAMB16_S9                        : 1
# Clock Buffers                      : 2
#   BUFG                             : 1
#   BUFGP                            : 1
# IO Buffers                         : 74
#   IBUF                             : 38
#   OBUF                             : 36
# MULTs                              : 1
#   MULT18X18                        : 1
=========================================================================
Device utilization summary:
---------------------------
Selected Device : 3s400pq208-4
Number of Slices:            1520 out of 3584   42%
Number of Slice Flip Flops:   597 out of 7168    8%
Number of 4 input LUTs:      2894 out of 7168   40%
Number of IOs:                 75
Number of bonded IOBs:         75 out of  141   53%
Number of BRAMs:                4 out of   16   25%
Number of MULT18X18s:           1 out of   16    6%
Number of GCLKs:                2 out of    8   25%
=========================================================================

The number of BRAMs should be 13, as is inferred in the working WebPACK 7.1i version.