Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
On Aug 3, 8:42 am, Ray Andraka <r...@andraka.com> wrote:
> Rob wrote:
> > Peter,
> >
> > I'm not being a wise guy by any stretch. I am seriously inquiring for my own edification.
> >
> > Name some things that Xilinx can do that Altera can't or, more specifically, is there an application out there in the world today that can only be solved with a Xilinx part? Would it be a fair statement to say that >90% of the designs out there can be done effectively with either company and the real advantages lie more with:
> >
> > 1. Service
> > 2. Tools
> > 3. Price
> > 4. Delivery
> > 5. Familiarity: meaning that once someone starts with a certain vendor, uses them for a while, and understands the hardware and how to flex it, they're less likely to switch--especially since most of us are put on ridiculous delivery schedules.
> >
> > Again, not trying to raise ire, just humbly asking a question.
> >
> > Best regards,
> > Rob
>
> The different devices have different features. As a result, a design for the same end purpose executed in an Altera device may be very different from a design executed in a Xilinx device. This is particularly true if you are pushing the performance or density envelopes.
>
> An example of this is that a design that needs many very small memories (short reorder queues, for example) might be considerably more compact in Xilinx by taking advantage of the SRL16 primitives. A design with strictly 8 or 9 bit arithmetic and lots of multiplies might be a better fit to Altera Stratix because of its 9x9 multipliers.
>
> It isn't a matter of one device being unsuitable as much as it is an issue of one device being better suited.

Ray's point hit it on the head. Be aware, however, that you need to actually power and cool the resulting design too. Many SRL16s in a design may lead to interesting data-dependent power issues and cooling problems at best-case clock rates.
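For readers unfamiliar with the primitive Ray mentions: an SRL16 is a single LUT acting as a 16-deep, 1-bit shift register with a dynamically addressable tap (address A gives a delay of A + 1 clocks). A host-side behavioral sketch, not vendor code, just to make the semantics concrete:

```c
#include <stdint.h>

/* Behavioral model of a Xilinx SRL16: one LUT used as a 16-deep, 1-bit
 * shift register with a dynamically addressable tap.  Tap address A
 * (0-15) selects a delay of A + 1 clocks, so one LUT replaces up to 16
 * flip-flops of delay line -- the density win Ray describes. */
typedef struct { uint16_t bits; } srl16_t;

/* One clock edge: shift din into bit 0, return the addressed tap. */
static int srl16_clock(srl16_t *s, int din, unsigned addr)
{
    s->bits = (uint16_t)((s->bits << 1) | (din & 1));
    return (s->bits >> (addr & 0xFu)) & 1;
}
```

Note this is exactly why a design full of SRL16s toggles a lot of nodes every cycle: every bit in every delay line moves on every clock, which is the data-dependent power issue mentioned above.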
If FCS dates are critical to company success, you might consider prototyping both vendors' solutions early, and once you have working designs let the price/availability shoot-out occur. This avoids late discoveries about suitability and leaves vendor choices open longer, while impacting the R&D budget less than a few percent for most larger projects.

Article: 122701
On Fri, 03 Aug 2007 09:57:56 -0700, Mike Treseler <mike_treseler@comcast.net> wrote:
>Evan Lavelle wrote:
>
>> I have to admit that I don't understand what the big deal in FPGA
>> timing is. All the hard work has already been done by the big-$ tools,
>> before the customer even starts a design.
>
>For a conservative designer with a rational
>synchronization plan, I agree. Certainly I
>can't move devices around, and I find it
>difficult to beat the vendor tools at floorplanning.

I'm not sure that I made myself clear - my point was that (a) it must surely be a lot easier for an FPGA vendor to write an FPGA timing analyser than for an EDA vendor to write a 'real' analyser, and (b) that, at first sight, it seems pointless to additionally run PT as well as the FPGA vendor tools. The only thing that PT can bring to the party is, I think, an ability to effectively query the timing database. It can't give you any new numbers, since it only uses the information that the FPGA vendor gave it in the first place, and the FPGA vendor probably got that information from PT anyway. I'm not suggesting that you don't run STA, as well as a timing sim, for anything that's not trivial.

Evan

Article: 122702
There is a huge difference, however. The cell processors and GPUs, as complex as they are, still have a limited number of instructions a compiler has to contend with. An FPGA is an open slate, which affords far more flexibility in the design, but at the price of making it far more difficult to automate the design process. Remember, no matter how you wrap it, FPGA design is digital circuit design. On the other hand, GPUs, cell processors and the like are predefined hardware whose function is one of a very few pre-determined operations strung together as a sequence of operations. I think it will still be a long time before you get the "programming model" you seek in an FPGA. With current design methodology, getting there discards the flexibility that makes the FPGA perform so well on HPC applications.

Article: 122703
Philipp Klaus Krause <pkk@spth.de> writes:
> A video game cartridge. Containing the EPROM (program and data), EEPROM
> (for savegames) and CPLD (bank switching, access to I²C).

Even if there were a single-chip solution for that (which there isn't), it would cost *much* more than a multichip solution.

Article: 122704
Clark,

I asked around, and no one had anything handy to do this. This is their recommendation:

Shutdown sequence:
1) Trigger the shutdown sequence with an interrupt.
2) The shutdown routine should save all registers and state. It must not use the stack. Next, copy all of memory to a non-volatile storage device.
3) Write to a non-volatile memory location a flag that at next boot a warm-boot should be performed. I recommend a 32 bit word with a unique pattern. Ex: 1234_CDEFh
4) Processor then signals the power supply to turn off.

Restart sequence:
1) At boot the processor checks the warm-boot flag. If set, follow the steps below. Again, don't use the stack.
2) Copy the saved memory from the non-volatile storage device to system memory.
3) Restore all saved processor registers.
4) Execute a return from interrupt.

This assumes power is lost after suspend. If power is not lost, then there is no need for the non-volatile memory.

Austin

Article: 122705
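A minimal C sketch of the warm-boot flag handling described above (the flag location is board-specific, so it is passed in here to keep the logic host-testable; the magic pattern is the one suggested in the post):

```c
#include <stdint.h>

#define WARM_BOOT_MAGIC 0x1234CDEFu

/* Shutdown step 3: in the real system `flag` would point at a fixed
 * non-volatile location, e.g. (volatile uint32_t *)SOME_NV_ADDR. */
static void mark_warm_boot(volatile uint32_t *flag)
{
    *flag = WARM_BOOT_MAGIC;
}

/* Restart step 1: returns nonzero if a warm boot should be performed.
 * The flag is cleared immediately, so a crash during resume falls back
 * to a cold boot instead of looping forever. */
static int check_warm_boot(volatile uint32_t *flag)
{
    if (*flag == WARM_BOOT_MAGIC) {
        *flag = 0;
        return 1;
    }
    return 0;
}
```

Clearing the flag before attempting the restore is the robustness point Matthew raises in the follow-up: a failed resume then degrades to a normal boot.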
Evan Lavelle wrote:
> I'm not sure that I made myself clear - my point was that (a) it must
> surely be a lot easier for an FPGA vendor to write an FPGA timing
> analyser, than for an EDA vendor to write a 'real' analyser, and (b)
> that, at first sight, it seems pointless to additionally run PT as
> well as the FPGA vendor tools.

That was clear. I was on a tangent about eliminating the need for complex timing constraints in the first place by better structuring the design.

-- Mike Treseler

Article: 122706
Or you could do pretty much what is given in 1, 2, 3, and 4 except overwrite the address that the bootloader loads from to the first "resume" instruction. But Austin's answer seems more robust, assuming you clear the "warm boot" flag in step 5 of the resume process.

---Matthew Hicks

> Clark,
>
> I asked around, and no one had anything handy to do this. This is
> their recommendation:
>
> Shutdown sequence:
> 1) Trigger the shutdown sequence with an interrupt.
> 2) The shutdown routine should save all registers and state. It must
> not use the stack. Next, copy all of memory to a non-volatile storage
> device.
> 3) Write to a non-volatile memory location a flag that at next boot a
> warm-boot should be performed. I recommend a 32 bit word with a unique
> pattern. Ex: 1234_CDEFh
> 4) Processor then signals the power supply to turn off.
>
> Restart sequence:
> 1) At boot the processor checks the warm-boot flag. If set follow the
> steps below. Again, don't use the stack.
> 2) Copy the saved memory from the non-volatile storage device to system
> memory.
> 3) Restore all saved processor registers.
> 4) Execute a return from interrupt.
>
> This assumes power is lost after suspend. If power is not lost, then
> there is no need for the non-volatile memory.
>
> Austin

Article: 122707
One final note: this assumes you copy the stack into the external memory when suspending, and then copy the external memory back into the stack before executing the "return" instruction. The comment "do not use stack" just implies that the suspend and restart code should save the stack and restore the stack, and not use any subroutine calls (as that would make managing the stack pointer and stack contents more difficult).

Austin

Matthew Hicks wrote:
> Or you could do pretty much what is given in 1, 2, 3, and 4 except
> overwrite the address that the bootloader loads from to the first
> "resume" instruction. But Austin's answer seems more robust, assuming
> you clear the "warm boot" flag in step 5 of the resume process.
>
> ---Matthew Hicks
>
> <snip>

Article: 122708
"Dolphin" <Karel.Deprez@gemidis.be> wrote in message news:1185955958.976050.126800@o61g2000hsh.googlegroups.com...
> Hello,
>
> We have implemented a variable phase shift in a Spartan 3E device. The
> phase shift can be set with a register. Normally the PSDONE signal
> should go high when a phase shift is performed. This happens but takes
> a long time (several minutes). The datasheet says:
>
> "The phase adjustment might require as many as 100 CLKIN
> cycles plus 3 PSCLK cycles to take effect, at which point the
> DCM's PSDONE output goes High for one PSCLK cycle.
> This pulse indicates that the PS unit completed"
>
> However it seems that our design is much slower...
> The DCM that does the phase shift gets its clock from another DCM.
> Could it be that there is too much jitter on this clock?
>
> Anybody had a similar problem?
>
> Thanks and best regards,
> Karel Deprez

Sorry I missed your post - business travel for a couple of days.

My experience with the Spartan 3E shows that under some circumstances, the PSDONE is delayed. I was doing a significant number of phase tweaks back and forth under some conditions, with one source behaving quite differently than another (different designs, same effective signal). It seems - from discussions with people deep within Xilinx while pursuing these troubles - the phase shift is held off because the "input clock is jittery with some special jitter pattern. The skew filter kicks in, managing DLL updates based on a short-term average of input phase. Variable updates are also delayed with other DLL updates."

I'd suggest you call the apps hotline, indicate you're having trouble with the PSDONE arriving in a timely manner, and say that you'd like "a back door code to turn off the skew filter (using a -g option)." You'll need to provide the exact DCM location you are using for that option. "Then PSEN->PSDONE cycles will be roughly 10 deterministic cycles."
While it was suggested that I could discontinue the -g option once I debugged the problem, the "problem" as I saw it was that the DCM wasn't working for my input signal. I never changed the option back, but we are currently troubleshooting around that same front end. It may be that the very nature of the jitter on the DCM feeding your second DCM is why there are troubles getting the skew filter to cooperate.

In my own measurements, I ran a histogram of the number of cycles it took for my system to generate PSDONE. The results suggested that - in my circumstance - the deterministic data sheet values were not reliable. With the specific -g option, the system worked, and worked solidly.

What was most fun was watching the DCM lose lock (without saying so) when I'd retry the PSEN after not receiving PSDONE for too long, at the same cycle the PSDONE asserts (512 cycles later, for one timeout setting). By just retrying the PSEN, I'd eventually see the original PSDONE, but when they tripped over each other in this strange situation, there might be lock troubles. The unreported lock loss was always preceded by the PSEN reissue coincident with the PSDONE report. I don't know if the inverse was true (it probably was not, given the number of delays extending this far).

- John_H

Article: 122709
FPGA Central is the central place to find complete information about FPGA Vendors, FPGA Products, IPs, FPGA Events, FPGA News and so much more.

A Field Programmable Gate Array, aka FPGA, is a semiconductor device containing programmable logic components called "logic blocks", and programmable interconnects. They are especially popular for prototyping integrated circuit designs. Once the design is set, hardwired chips are produced for faster performance. Flexibility and low cost of implementation are the key factors in the FPGA's success. Despite its gaining popularity over other semiconductor devices such as ASICs, full-custom chips and processors, there was no place to find all FPGA-related information on the Internet. FPGA Central (http://www.fpgaCentral.com) aims to solve that problem by bringing all FPGA-related information under one central location and more. FPGA Central is a vendor-neutral place to find everything about FPGAs, CPLDs, and EDA tools for FPGAs.

"There are websites which either provide FPGA blogs or a small list of links from around the web, but none of these sites provide a complete portal for FPGA users. FPGA Central is created to provide a central place for FPGA Vendors & Users to share experiences and information about FPGA Design, Development, Verification, Validation, Process, Tools and Products."

The site's major features include:

* Largest FPGA Vendor Directory (over 450) - (http://www.fpgacentral.com/vendor/directory)
* Vast Product Directory (over 800) - (http://www.fpgacentral.com/product/catalog)
* IP Catalog (over 750) - (http://www.fpgacentral.com/ip/catalog)
* Find FPGA Events around the world - (http://www.fpgacentral.com/event)
* FPGA Discussion Forums - (http://www.fpgacentral.com/forum)
* Latest FPGA News & Press Releases
* FPGA Jobs listing

"We are enhancing the user experience by providing Reviews & Ratings of all the listings on our web site. Users are encouraged to discuss their problems and share thoughts."
And this is just the beginning: over the next couple of months FPGA Central plans to add more features to help users make more informed decisions and vendors showcase their products. It is a WIN-WIN situation for all.

For a limited time, all the VLSI enthusiasts around the world can WIN FREE gifts like iPods, T-shirts, FPGA books & more. All you need to do is register for a FREE account with FPGA Central.

About FPGA Central: FPGA Central is the world's 1st FPGA Portal. It is created to provide a central place for FPGA Vendors & Users to share experiences and information about FPGA Design, Development, Verification, Validation, Process, Tools and Products. For more information visit the FPGA Central website at www.fpgacentral.com or email "info at fpgacentral.com"

###

http://www.fpgacentral.com/pr/2007/worlds-1st-fpga-centric-portal-goes-live
http://www.edn.com/pressRelease/2140062246.html

Article: 122710
On Aug 2, 3:41 pm, jjohn...@cs.ucf.edu wrote:
> Good luck!
>
> I think the fastest LVDS serdes you can do in StratixII (non-GX) is
> 840 Mbps nominal, and 1Gbps with their ALT_LVDS macro and DPA (dynamic
> phase alignment) turned on.
>
> If anyone can get it running faster, I look forward to seeing how.

I'm stuck on Stratix II and Virtex 5. Fortunately I can use multiple LVDS pairs from both Stratix and Virtex, so I guess I can create a pack of 1 Gbps "lanes" which could finally transfer 3 Gbps, couldn't it?

thx,
Vasile

Article: 122711
"austin" <austin@xilinx.com> wrote in message news:f902co$gp31@cnn.xilinx.com...
> Clark,
>
> I asked around, and no one had anything handy to do this. This is their
> recommendation:
>
> Shutdown sequence:
> 1) Trigger the shutdown sequence with an interrupt.
> 2) The shutdown routine should save all registers and state. It must
> not use the stack. Next, copy all of memory to a non-volatile storage
> device.
> 3) Write to a non-volatile memory location a flag that at next boot a
> warm-boot should be performed. I recommend a 32 bit word with a unique
> pattern. Ex: 1234_CDEFh
> 4) Processor then signals the power supply to turn off.
>
> Restart sequence:
> 1) At boot the processor checks the warm-boot flag. If set follow the
> steps below. Again, don't use the stack.
> 2) Copy the saved memory from the non-volatile storage device to system
> memory.
> 3) Restore all saved processor registers.
> 4) Execute a return from interrupt.
>
> This assumes power is lost after suspend. If power is not lost, then
> there is no need for the non-volatile memory.
>
> Austin

Yes, this is the right sequence. In our case we have SDRAM that we plan to put in self-refresh mode (another issue I'm not sure how to do from the Xilinx DDR core?), so there's no need for the NV memory. I don't know if I mentioned that we're using Linux 2.6. There are a lot of architectures in the kernel tree that have suspend/resume functions but nothing for ppc405. Since every V2Pro or V4fx user needs this same function, I'm surprised there isn't some mature code out there.

Thanks,
Clark

Article: 122712
On Aug 3, 1:18 pm, Ray Andraka <r...@andraka.com> wrote:
> There is a huge difference however. The cell processors and GPUs, as
> complex as they are still have a limited number of instructions a
> compiler has to contend with.

The reality with "huge computations" is that they are computations. When that "huge computation" is more memory than fits inside an FPGA, then the FPGA design has to go to external memory ... slow. When huge problems are matched to huge caches and fast effective cycle times, you cannot get there with an FPGA when the ASIC (a CPU + cache) is matched to the workload. Finally, there are low-cost SIMD/MIMD engines with high memory bandwidth, well integrated as coprocessors. The nVidia CUDA tools really gut large-iron solutions in both price and performance for many fairly large applications.

> An FPGA is an open slate, which affords
> far more flexibility in the design, but at the price of making it far
> more difficult to automate the design process.

Yep ... more difficult, but not impossible.

> Remember, no matter how
> you wrap it, FPGA design is digital circuit design.

Nope .... wrapped well, it's just an execution environment.

> On the other hand,
> GPU's, cell processors and the like are predefined hardware whose
> function is one of a very few pre-determined operations strung together
> as a sequence of operations.

True ... the few pre-determined ones necessary to handle a "huge computation"; any other flexibility for that task is a moot point.

> I think it will still be a long time before you get the "programming
> model" you seek in an FPGA.

It's already here. Been here for a while in Streams-C, Mitrion-C, Handel-C, .... and a few high-level application languages.

> With current design methodology, getting
> there discards the flexibility that makes the FPGA perform so well on
> HPC applications.

Doesn't discard it ... simply doesn't use it when it's not needed.
The flexibility with FPGAs is that you can do more if you need to; the down side is that if you are forced to external memories, the caches in GPUs/CPUs are a HUGE win. Most of the time, using an accelerator isn't about getting 100% from it, it's simply being X times faster than the host CPU. Many customers can use an FPGA cost effectively, only getting 30% of its best-case performance, and be extremely happy that it's 10X the host processor.

For the "huge computation" project these guys are doing .... and I've seen the code ... it's an ideal nVidia CUDA application .... $600 or less in hardware, and a few weeks' programming work. A fraction of the cost of doing it in an FPGA, and it's highly likely the nVidia GPU will be much faster. Especially since the application is a vision project with live sensor data, and the GPU has the display on board. A year ago, an FPGA would have been the only choice.

Article: 122713
Hello,

I would like to utilize a controller for a SINGLE data rate SDRAM (Micron MT48LC16M16A2TG-75, to be specific). In the past I've used Xilinx' MiG 1.4 to obtain a DDR2 controller, which I ended up pretty happy with (after forgetting the via dolorosa to set it up...). Its main benefit is a simple and convenient FIFO-based user interface.

For some reason, I thought that MiG would create an SDR controller as well (it's simpler, after all), but it turns out I'm very wrong: the last piece of attention on Xilinx' behalf to SDR, which I've managed to find, is xapp134. That paper, along with its HDL code, originates in 1999, and is more or less the same ever since. The controller offered is hence adapted to Virtex-I and Spartan-II, and is yucky in several respects. Newer application notes (as well as MiG) relate to faster memory classes (DDR, DDR2, QDR, you name it), with controllers eating up some clock resources to solve timing problems. And all I wanted was a cheap memory with reasonably simple access.

Given the situation, I'm considering creating a DDR controller with MiG for a memory with similar attributes (bus width, array size, etc.) and then hacking it down to SDR. Since the command interface is the same, that should leave me with changing the data flow and making the burst timing right. Not much fun, but hey, after debugging the MiG DDR controller, I should survive this one as well.

And here's the irony: I picked this SDRAM to make things simpler for me. So before I start this little self-torture, does anyone have a better idea?

Thanks,
Eli

Article: 122714
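For anyone attempting the hack-down described above: the SDR command set and power-up sequence are small. A hedged sketch of the standard JEDEC-style command encoding and init order for parts like the MT48LC16M16 (check the datasheet for exact refresh counts and tRP/tRFC/tMRD waits):

```c
/* SDR SDRAM command truth table, packed as nCS<<3 | nRAS<<2 | nCAS<<1 | nWE.
 * All commands below assert chip select (nCS = 0). */
enum sdram_cmd {
    CMD_LOAD_MODE = 0x0, /* 0000: load mode register                 */
    CMD_REFRESH   = 0x1, /* 0001: auto refresh                       */
    CMD_PRECHARGE = 0x2, /* 0010: close row(s); A10 = 1 -> all banks */
    CMD_ACTIVE    = 0x3, /* 0011: open a row                         */
    CMD_WRITE     = 0x4, /* 0100                                     */
    CMD_READ      = 0x5, /* 0101                                     */
    CMD_NOP       = 0x7, /* 0111: no operation                      */
};

/* Power-up init after ~100 us of NOPs with a stable clock:
 * precharge all banks, at least two auto refreshes, then load mode. */
static const enum sdram_cmd init_seq[] = {
    CMD_PRECHARGE, CMD_REFRESH, CMD_REFRESH, CMD_LOAD_MODE,
};
```

The command interface really is shared with DDR, as the post notes; the hack-down work is on the data path (one word per clock instead of two) and on burst timing, not on this table.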
Eli Billauer wrote:
> And here's the irony:
> I picked this SDRAM to make things simpler for me.
> So before I start this little self torture, does anyone have a better
> idea?

I would consider a revision to the circuit board. How long will a MT48LC16M16 be stocked by distributors?

-- Mike Treseler

Article: 122715
On Aug 4, 3:38 pm, Mike Treseler <mike_trese...@comcast.net> wrote:
> I would consider a revision to the circuit board.
> How long will a MT48LC16M16 be stocked by distributors?

Well, the board is already assembled and under initial hardware checkout, so it's a bit late for that... And honestly, I can't see why DDR should be used when the bandwidth isn't required. From my previous experience, it's far more complicated than talking with an SDR chip, which is more or less like any synchronous chip.

Eli

Article: 122716
Aha! So two chances to get a controller. Will try this! Thanks to both of you!

Article: 122717
NorthWest Logic provides an SDR controller for the Spartan-3E. Xilinx only provides DDR and more complicated reference designs. More info here:

http://www.xilinx.com/xlnx/xebiz/designResources/ip_product_details.jsp?key=SDR_SDRAM&sGlobalNavPick=&sSecondaryNavPick=

Cheers
Jaco

Article: 122718
I recently acquired 9 units of XC4VLX40-10FFG1148C from a company. The ICs are in the original sealed envelope (not opened).

Reference: http://www.digikey.com/scripts/DkSearch/dksus.dll?Detail?name=122-1491-ND

Please let me know whether anyone would be interested in buying these from me.

Thanks
Mahendra Varman

Article: 122719
hi,

We are trying to run the "Virtex4_PPC_Example_9_1" example program (that came along with the EDK9.1.2 kit) that has the TestInterrupt.c routine (timer interrupt driving the LEDs and UART interface displaying the status). We are able to see the LEDs flashing on the board, but we are not able to speed up or slow down the clock using the 'f', 's' inputs through the hyperterminal, which we are expected to be able to do.

We have installed EDK9.1i on ubuntu 2.6.17, installed the drivers for the parallel port ("windrvr6" and "xpc4drvr"), and are able to update the bitstream from XPS. We are also able to establish the connection to the Virtex-4 FX12 LC development board (the message log says so), but we are getting an error message, which we could not google out successfully:

INFO:iMPACT:501 - '1': Added Device xc4vfx12 successfully.
----------------------------------------------------------------------
----------------------------------------------------------------------
Version is 1111
'2': : Manufacturer's ID =Xilinx xcf08p, Version : 15
INFO:iMPACT:1777 - Reading /opt/Xilinx91i/xcfp/data/xcf08p.bsd...
INFO:iMPACT:501 - '1': Added Device xcf08p successfully.
----------------------------------------------------------------------
----------------------------------------------------------------------
done.
Elapsed time = 0 sec.
Elapsed time = 0 sec.
ERROR:iMPACT:1062 - Can only assign files to devices between positions 1 to 2
----------------------------------------------------------------------
----------------------------------------------------------------------

We opened "impact" through the command prompt and tried to configure the bitstream, selecting various combinations of "download.bit", "system.bit", "TestInterrupt.elf", "system_bd.bmm" and "system.bmm".
It will be great if someone can throw us some ideas as to what could have gone wrong, or if someone has implemented that example on ubuntu and Virtex-4 successfully.

thanks and regards
vijay

Article: 122720
Article: 122721
(reposting with catchier subject line and some changes...)

Hi,

We are trying to run the "Virtex4_PPC_Example_9_1" example program from EDK9.1.2 that has the TestInterrupt.c routine (timer interrupt driving the LEDs and UART interface displaying the status). The LEDs on the board appear to flash quickly, but we're unable to speed up or slow down the clock using the 'f', 's' inputs via the serial port.

Software: EDK9.1i on linux 2.6.17, parport jtag, "windrvr6" and "xpc4drvr" drivers and devices all there. We are able to update the bitstream from XPS. We are also able to establish the connection to the Virtex-4 FX12 LC development board (the message log says so), but we are getting the following error:

**********************
ERROR:iMPACT:1062 - Can only assign files to devices between positions 1 to 2
**********************

(more of the logs are below.)

Google says nothing about "files to devices between"! We ran "impact" through the command prompt and tried to download the bitstream, selecting various combinations of ("download.bit", "system.bit"), "TestInterrupt.elf" and ("system_bd.bmm" and "system.bmm").

Has anyone ever seen this? Any suggestions?

thanks and regards
vijay

*********************************************
More detailed message
*********************************************

impact -batch etc/download.cmd
Release 9.1.02i - iMPACT J.33
Copyright (c) 1995-2007 Xilinx, Inc. All rights reserved.
AutoDetecting cable. Please wait.
Reusing 78018001 key.
Reusing FC018001 key.
Connecting to cable (Parallel Port - parport0).
WinDriver v8.11 Jungo (c) 1997 - 2006 Build Date: Oct 18 2006 X86 32bit 12:08:05.
parport0: baseAddress=0x378, ecpAddress=0x778
LPT base address = 0378h.
ECP base address = 0778h.
ECP hardware is detected.
Cable connection established.
Connecting to cable (Parallel Port - parport0) in ECP mode.
Cannot access /dev/xpc4_0 - No such file or directory.
Cable connection failed.
Connecting to cable (Parallel Port - parport0).
WinDriver v8.11 Jungo (c) 1997 - 2006 Build Date: Oct 18 2006 X86 32bit 12:08:05.
LPT base address = 0378h.
ECP base address = 0778h.
Cable connection established.
ECP port test failed. Using download cable in compatibility mode.
Identifying chain contents ....Version is 0000
'1': : Manufacturer's ID =Xilinx xc4vfx12, Version : 0
INFO:iMPACT:1777 - Reading /opt/Xilinx91i/virtex4/data/xc4vfx12.bsd...
INFO:iMPACT:501 - '1': Added Device xc4vfx12 successfully.
----------------------------------------------------------------------
----------------------------------------------------------------------
Version is 1111
'2': : Manufacturer's ID =Xilinx xcf08p, Version : 15
INFO:iMPACT:1777 - Reading /opt/Xilinx91i/xcfp/data/xcf08p.bsd...
INFO:iMPACT:501 - '1': Added Device xcf08p successfully.
----------------------------------------------------------------------
----------------------------------------------------------------------
done.
Elapsed time = 0 sec.
Elapsed time = 0 sec.
ERROR:iMPACT:1062 - Can only assign files to devices between positions 1 to 2
----------------------------------------------------------------------
----------------------------------------------------------------------
Done!

Article: 122722
Hello,

I'm trying to simulate a C program in MB-GDB, but I get into trouble when trying to simulate memory-mapped I/O devices. I'm not using Microblaze but an improved Openfire core (same ISA), so the Microblaze toolchain works for me (I also modified some TCL script inside mb-gdb in order to use gdb's Microblaze simulator instead of XMD).

It seems that GDB marks or initializes only memory regions defined in the executable program (sections .text, .rodata, .data, etc.), and when my program tries to access the memory area where I/O devices are mapped (outside the .text, .data, etc. regions), GDB raises SIGSEGV (segmentation violation signals). Example:

led_status = *(char *) IODEVICE_LEDS; // IODEVICES start at 0x80000000 but program resides from 0x0 to 0x1000

A solution I found is to define the I/O device memory area at the end of the linker script and initialize it with some data:

. = _IO_START_ADDR;
_iospace : { . = _IO_SIZE; }

This seems to work more or less, but I need to execute the program with breakpoints and manually set / read the values of such memory positions in order to simulate the device behaviour.

The question is: Is there any way in GDB to map a memory location (8, 16 or 32 bit size) to a file for input or output (that is, every time I read [write] from such a memory location, the value is read [written] from [to] a file)? Is it also possible to use a script connected to such a memory location (for example, interactively controlling the position of switches or so)?

Thank you very much for your help
Manuel
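There is no built-in GDB mechanism that transparently maps one memory cell to a file, but watchpoints with attached command lists get part of the way there for the output side. A hedged sketch of a GDB command file (the 0x80000000 address is just the example from the question above; `set logging` sends the printf output to a file):

```
# Log every write the program makes to the LED register.
set logging file io_leds.log
set logging on
watch *(char *)0x80000000
commands
  silent
  printf "LED write: 0x%02x\n", *(char *)0x80000000
  continue
end
```

For the input side, `rwatch` stops on reads, but the watchpoint fires after the access, so a scripted value has to be planted before the program reads it (e.g. from a breakpoint placed just ahead of the access). A full interactive device model is usually easier to build with the simulator's own hooks than with GDB scripting.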
I am reading some papers about algorithm implementations. I noticed they like to compare the synthesis area of the fixed-point implementation. I wonder where they find the area report for the implementation; I only see the slice percentage report. I am using Xilinx ISE. I want to know where to generate the area report. Any comment is welcome.

Article: 122724
> The Xilinx
> Memory Interface Generator (MIG) generates ready-to-use HDL for the
> Spartan-3E Starter Kit.

I downloaded the reference design but only found Verilog files, which I cannot use. There also seems to be an .ngc file missing, which is nowhere in the zip archive. Isn't there a working project for ISE 8/9 which compiles directly and produces some test?

----

Concerning the other design: I found the .../dash/ocddr design. This is one file, which I do not know how to handle. I had been able to separate the files, but apart from the fact that there is a reset module missing (and some debris of a VGA module), I do not know how to start with it and where. ISE imports it and shows the hierarchy, but this is all.

---