On Wed, 16 Jun 2004 08:39:17 -0400, salman sheikh wrote:

> Hello,
>
> I just installed Xilinx ISE 6.2i on a Linux box and it is sluggish as
> anything. Does anyone know why? I am running on a P4 1.7GHz w/ 1GB of
> RAM. On windows, it is much more zippy. Could it be the gui toolkit
> that Xilinx is using (it seems like JAVA.......slow as a slug....)?
>
> Thanks.
>
> Salman

I'm surprised that they ran on SUSE 9.1 at all. The GUI tools don't work
on Mandrake 10.0; I'm still using Mandrake 9.2 on my workstation because
of this. The only GUI tool I ever use is FPGA Editor, and that works OK
even though I'm using an old machine, a 500MHz PIII with 512M RAM. I do
everything else with the CLI and that works fine. The thing that you
absolutely can't do is run the GUI tools remotely; the performance over
an ethernet is horrendous. Everything else I use, Cadence's NC and
Mentor's ModelSim, works fine on Mandrake 10 and there is no performance
penalty when running them over a network. Hopefully Xilinx will switch to
a decent toolkit in future releases, one that isn't tied to a particular
distribution and that has reasonable performance.

Article: 70501
Hi John,

> Forgive me if this has been asked before, but does anybody have
> comments or links to simple methods of compressing/decompressing
> Xilinx configuration bitstreams?

Can't help you on the Xilinx front, but many of Altera's newest chips
(Cyclone, Stratix II) support on-the-fly decompression of the bitstream.
The Quartus software compresses the bitstream, which is then programmed
into the device using pretty much any of the many methods of programming
available, and the chip's configuration controller will decompress the
bitstream that it sees. This typically achieves a 1.9-2.3:1 compression
ratio, depending on the device utilization, RAM contents and such.

Some of our programming devices can also decompress bitstreams
on-the-fly, allowing bitstream compression for other chip families that
do not support decompression internally.

See the Configuration Handbook Volume 2
(http://www.altera.com/literature/hb/cfg/cfg_volume2.pdf) for a detailed
description of device programming and compression options.

Regards,

Paul Leventis
Altera Corp.

Article: 70502
Paul Leventis (at home) wrote:

> Hi John,
>
>> Forgive me if this has been asked before, but does anybody have
>> comments or links to simple methods of compressing/decompressing
>> Xilinx configuration bitstreams?
>
> Can't help you on the Xilinx front, but many of Altera's newest chips
> (Cyclone, Stratix II) support on-the-fly decompression of the bitstream.
> The Quartus software compresses the bitstream which is then programmed
> into the device using pretty much any of the many methods of programming
> available, and the chip's configuration controller will decompress the
> bitstream that it sees. This typically achieves a 1.9-2.3:1 compression
> ratio, depending on the device utilization, RAM contents and such.

<Snip>

Has anyone taken the simple step of running ZIP on some Altera compressed
files, to see how much more compression is possible?

-jg

Article: 70503
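jg's question is easy to answer empirically: run a general-purpose compressor over the file and compare sizes. Here is a sketch using Python's zlib (the same DEFLATE algorithm behind ZIP and gzip); using random bytes as a stand-in for an already-compressed bitstream, and all-zero bytes for a sparsely utilized raw one, is an assumption made purely for illustration:

```python
import os
import zlib

def ratio(data: bytes) -> float:
    """Compressed-to-original size ratio for a byte string (lower is better)."""
    return len(zlib.compress(data, 9)) / len(data)

# Stand-in for an Altera-compressed bitstream: high-entropy data
# barely compresses any further (the ratio stays near 1.0)...
already_compressed = os.urandom(64 * 1024)

# ...while a raw bitstream from a sparsely utilized part (mostly
# zero frames) still shrinks dramatically.
raw_sparse = bytes(64 * 1024)

print(f"high-entropy data: {ratio(already_compressed):.3f}")
print(f"all-zero data:     {ratio(raw_sparse):.4f}")
```

The usual outcome with any decent first-stage compressor is that ZIP finds very little left to squeeze; a large further gain would suggest the first stage is a weak scheme (run-length only, say) rather than a real entropy coder.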
On Thu, 17 Jun 2004 16:29:49 -0700, "Steven K. Knapp"
<steve.knappNO#SPAM@xilinx.com> wrote:

>"Allan Herriman" <allan.herriman.hates.spam@ctam.com.au.invalid> wrote in
>message news:drb3d09tn390n8g3m7a0fdebqqapvc20r9@4ax.com...
>
>> [Allan calls it a bug]
>
> [Steve calls it a feature]

That's pretty much the response I expected.

Regards,
Allan.

Article: 70504
>Commandline tools fly, we place and route on a dual hyperthreaded xeon (4
>logical cpus) and setting 4 designs off in parallel gives impressive
>performance, made the time spent building large Makefiles worth while.

Makefiles are good anyway. Think of them as documentation. Beats scraps
of paper with notes about what you have to do in the GUI to get the right
answer. Doubly so if some of the GUI flags are sticky, so you only have
to do it "once" and it doesn't get added to the checklist.

--
The suespammers.org mail server is located in California. So are all my
other mailboxes. Please do not send unsolicited bulk e-mail or
unsolicited commercial e-mail to my suespammers.org address or any of my
other addresses. These are my opinions, not necessarily my employer's.
I hate spam.

Article: 70505
On Thu, 17 Jun 2004 22:09:31 GMT, "Clark Pope" <cepope@mindspring.com>
wrote:

>The bit generation tool has an option to compress the .bit file. I use
>this when I'm loading over JTAG to save time. I assume Xilinx has info
>on in system programming with a compressed .bit file.

This 'compression' merely merges identical frames. The probability of
getting identical frames in a well utilised FPGA isn't very high, so this
doesn't result in much reduction in file size.

Some experiments I did a few years ago (on Virtex-E and Virtex-2 files)
indicated that this compression made subsequent compression by tools such
as gzip *worse*.

It is, however, the only way to speed up JTAG loading.

Regards,
Allan.

Article: 70506
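Allan's observation — that merging identical frames gains nothing unless duplicate frames actually exist — can be illustrated with a toy model. The 424-byte frame length and the 4-byte back-reference token below are made-up numbers for the sketch, not the real Virtex frame format:

```python
import zlib

FRAME = 424  # hypothetical frame length; real lengths vary per device

def dedup(frames):
    """Toy model of bitgen's 'compress' option: replace each repeat of an
    already-seen frame with a short back-reference token."""
    seen = set()
    out = []
    for f in frames:
        if f in seen:
            out.append(b"REF!")  # stand-in for a 4-byte frame-address reference
        else:
            seen.add(f)
            out.append(f)
    return b"".join(out)

# Well utilised design: every frame differs, so dedup finds nothing.
dense = [bytes([i % 251, i % 253] * (FRAME // 2)) for i in range(100)]

# Mostly empty design: three quarters of the frames are identical zeros.
sparse = [dense[i] if i % 4 == 0 else bytes(FRAME) for i in range(100)]

for name, frames in (("dense", dense), ("sparse", sparse)):
    raw = b"".join(frames)
    print(name, "raw:", len(raw), "dedup:", len(dedup(frames)),
          "gzip-style:", len(zlib.compress(raw, 9)))
```

Running this shows the dense stream is untouched by frame merging while a byte-level compressor still finds redundancy within frames — which is consistent with Allan's finding that the bitgen option saves little on full devices.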
General Schvantzkoph wrote:

> On Wed, 16 Jun 2004 08:39:17 -0400, salman sheikh wrote:
>
>> Hello,
>>
>> I just installed Xilinx ISE 6.2i on a Linux box and it is sluggish as
>> anything. Does anyone know why? I am running on a P4 1.7GHz w/ 1GB of
>> RAM. On windows, it is much more zippy. Could it be the gui toolkit
>> that Xilinx is using (it seems like JAVA.......slow as a slug....)?
>>
>> Thanks.
>>
>> Salman
>
> I'm surprised that they ran on SUSE 9.1 at all. The GUI tools don't work
> on Mandrake 10.0, I'm still using Mandrake 9.2 on my workstation because
> of this. The only GUI tool I ever use is FPGA Editor and that works OK
> even though I'm using an old machine, 500MHz PIII with 512M RAM. I do
> everything else with CLI and that works fine. The thing that you
> absolutely can't do is run the GUI tools remotely, the performance over
> an ethernet is horrendous. Everything else I use, Cadence's NC,
> Mentor's ModelSim, both work fine on Mandrake 10 and there is no
> performance penalty when running them over a network. Hopefully Xilinx
> will switch to a decent toolkit in future releases, one that isn't tied
> to a particular distribution and that has reasonable performance.

I also couldn't run ISE 6.2i on Mandrake 10.0 but would be interested in
trying under wine. Can someone point me to an install procedure to get it
working under wine?

Thanks,
Tom

Article: 70507
How about Abstract Algebra? The math needed to understand Reed-Solomon,
BCH, and other codes is not intuitive--at least for most of us. The
"rules" are a little different over finite fields....

On a more basic note, you need math to budget your interfaces (how much
bandwidth is required, etc.) and, of course, your parts lists. And let's
not neglect FFTs and other digital signal processing opportunities.

Jason

"Hendra Gunawan" <u1000393@email.sjsu.edu> wrote in message
news:cao5ji$7epp0$1@hades.csu.net...
> How about digital logic design engineer? What kind of math required other
> than basic arithmetic? And don't they need a lot less math than say an RF
> engineer?
>
> Hendra

Article: 70508
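On Jason's finite-field point: the arithmetic behind Reed-Solomon and BCH really does follow different rules. Addition in GF(2^m) is bitwise XOR (every element is its own additive inverse), and multiplication is carry-less, reduced modulo an irreducible polynomial. A minimal sketch in Python; 0x11d is a polynomial commonly used for GF(2^8) Reed-Solomon codes and 0x11b is the AES one, but check your code's specification for the actual polynomial in use:

```python
def gf_mul(a: int, b: int, poly: int = 0x11d) -> int:
    """Multiply two elements of GF(2^8): shift-and-XOR (carry-less)
    multiplication, reduced modulo the field polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # "addition" is XOR in a field of characteristic 2
        a <<= 1
        if a & 0x100:       # degree-8 overflow: reduce by the polynomial
            a ^= poly
        b >>= 1
    return r

# Addition is XOR, so x + x = 0 for every element:
print(0x57 ^ 0x57)                      # 0
# In the AES field (poly 0x11b), 0x53 and 0xCA are multiplicative inverses:
print(hex(gf_mul(0x53, 0xCA, 0x11b)))   # 0x1
```

The same shift-and-XOR loop maps directly onto a few LUTs and registers, which is one reason these codes are so hardware-friendly despite the unfamiliar algebra.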
I'm having trouble accessing OCM peripherals from Linux. The symptom is a
machine check exception whenever the OCM address space is read or written.

This code is run from a kernel module:

    unsigned int* p = (unsigned int*)ioremap(0x40000000, 0x1000);
    printk("%08x\n", p[0]);

When the OCM address of 0x40000000 is replaced with, say, the UART at
0xA0000000, things work as expected: can peek and poke the UART registers
with no problem. How is the OCM different from a PLB peripheral from
Linux's point of view?

The cacheability registers, DCCR and ICCR, are correctly set by Linux to
0xf0000000, so only addresses under 512 MByte are cacheable (currently
occupied by DRAM), and OCM is well outside this range. (We use only
DSOCM.)

If the ioremap() address is replaced with something nonexistent, such as
0x50000000, then the same type of machine exception occurs. It's as if
the OCM didn't exist.

But running in real mode works fine: a small test application running
from PLB block RAM can see all of the OCM resources.

This is a 2VP20, processor version number 0x200108a0.

I plan to open a Xilinx hotline case, but thought I'd ask in the
newsgroup for good measure.

Cheers,
Peter Monta
RGB Networks, Inc.

Article: 70509
Hi, David

Did you use the EMAC core in EDK? I heard that it will be unstable unless
you buy a core from Xilinx.

Article: 70510
Synthesisers usually remove unused inputs. One crude way that works with
all synthesisers is to use the unused inputs in a dummy function. This
dummy function must be used somewhere, but if you make it so that it is
always '1' or '0' and use that value where you don't care, then that will
be enough. If you have a spare pin you can always attach the function to
that. A typical dummy function might be an OR gate of all unused signals,
possibly with some real signals as well, to ensure an always-'1' output.
Similarly, use an AND gate for '0'.

John Adair
Enterpoint Ltd. - Home of Broaddown2. The Ultimate Spartan3 Development
Board.
http://www.enterpoint.co.uk

This message is the personal opinion of the sender and not necessarily
that of Enterpoint Ltd.. Readers should make their own evaluation of the
facts. No responsibility for error or inaccuracy is accepted.

"Matt Dykes" <mattdykes@earthlink.net> wrote in message
news:8a89c2c3.0406171251.42e240ca@posting.google.com...
> XST keeps removing some of my input pins. I have used the LOC
> attribute in both the VHDL itself and in the .ucf file to no avail.
> After PAR, the pins declared in the port declaration (and LOCed to
> specific pins) have not been routed to a pad.
>
> The inputs ARE used by latching into a register that is later read.
> Any thoughts on why XST is removing them and how to make it stop?
> Thx-Matt

Article: 70511
Jim Granville <no.spam@designtools.co.nz> wrote in message

> We did some work with Run length compression, which is
> very simple (simple enough to code into CPLD), but has medium
> compression gains. ISTR about half the gains of ZIP?

Interesting. A student of mine did the same as a semester project. The
goal was to find a compression method simple enough that the logic to
program an FPGA from a NOR-Flash would fit into an XC9536.

For XC4K FPGAs he was very successful. He achieved a compression in the
range of 50% to 65% size compared to the original with runlength encoding
of 1s only. This is almost as good as zip.

The Virtex family seems to use its configuration bits a lot more
efficiently (encoded switch configurations?). He could not find any
simple solution for those.

Kolja Sulimma

Article: 70512
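Kolja doesn't say exactly which encoding the student used, so here is one plausible byte-level reconstruction of "runlength encoding of 1s only": runs of 0xFF bytes (XC4000-class bitstreams are dominated by 1-bits) are replaced by an escape byte plus a count, and everything else is copied literally. The escape value 0xFE and the 255-byte run limit are arbitrary choices for this sketch:

```python
MARK = 0xFE  # hypothetical escape byte; any value rare in the data works

def rle_ones(data: bytes) -> bytes:
    """Run-length encode runs of 0xFF bytes only; other bytes pass through.
    A literal MARK byte is escaped as (MARK, 0)."""
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == 0xFF:
            n = 1
            while i + n < len(data) and data[i + n] == 0xFF and n < 255:
                n += 1
            out += bytes([MARK, n])
            i += n
        elif data[i] == MARK:
            out += bytes([MARK, 0])
            i += 1
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    """Inverse of rle_ones."""
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == MARK:
            n = data[i + 1]
            out += b"\xff" * n if n else bytes([MARK])
            i += 2
        else:
            out.append(data[i])
            i += 1
    return bytes(out)
```

A decoder this simple maps onto a small CPLD state machine, which was the point of the exercise; on data whose 1-runs are short (Virtex-style densely encoded configurations) it gains almost nothing, matching Kolja's observation.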
<sanpab@eis.uva.es> wrote
> Hi all,
>
> I have installed ISE Foundation (6.2.03) and tried to generate an
> EDIF file from a Verilog file. I remember I could do it a few years
> ago with 4.x version (not sure).
>
> I need to install something special?
>
> Anyway, what happens if the design has parameters? The EDIF file
> allow it?
>
> Thanks in advance, Santiago.

Hi,
synthesize your design with xst and then use ngc2edif.exe from the
command line to translate the *.ngc netlist to EDIF format.

Another option is to run the 'translate' (ngdbuild) step after synthesis
and then use ngd2edif.exe from the command line.

/Michael

Article: 70513
Kolja Sulimma wrote:

> Jim Granville <no.spam@designtools.co.nz> wrote in message
>
>> We did some work with Run length compression, which is
>> very simple (simple enough to code into CPLD), but has medium
>> compression gains. ISTR about half the gains of ZIP ?
>
> Interesting. A student of mine did the same as a semester project.
> The goal was to find a compression method simple enough that the logic
> to programm an FPGA from a NOR-Flash would fit into an XC9536.
>
> For XC4K FPGAs he was very successfull. He achieved a compression in
> the range of 50% to 65% size compared to the original with runlength
> encoding of 1s only.
> This is almost as good as zip.
>
> The Virtex family seems to use it's configuration bits a lot more
> efficiently (encoded switch configurations ?). He could not find any
> simple solution for those.

Maybe you could have allowed him to also use a 9572 (or XC2C64)?

If there was a big change between families, it sounds like Xilinx
followed the same path, and did a simple reduction in bit-encode with
some small RLC - after all, a 9536 level resource will be minuscule in
an FPGA.

Article: 70514
Dear Steve/Allan:

What would be needed in my Picoblaze IDE to support Verilog? Please let
me know, so when I find the time I can add that.

Regards,
Henk van Kampen

Article: 70515
On 18 Jun 2004 02:43:41 -0700, henk@mediatronix.com (Henk van Kampen)
wrote:

>Dear Steve/Allan:
>What would be needed in my Picoblaze IDE to support Verilog. Please
>let me know, so when I find the time I can add that.

Hi Henk, it just needs to be able to generate the file containing the
ROM contents in Verilog instead of VHDL. Kcpsm3 generates files in both
languages; perhaps you could study what it does.

This doesn't help the OP though, as the core itself is written in VHDL.

Steve, would there be any problem if a third party (e.g. me) were to
publish a behavioural Verilog description of picoblaze[123]?

Regards,
Allan.

Article: 70516
Does anyone know if there are concrete plans for a Nios II port of eCos?
There are several other OSes ported to the Nios II already, such as
uC/OS-II and ucLinux, but I've heard nothing regarding eCos other than
four-year-old promises that Altera and Red Hat were working on it.

--
David

"I love deadlines. I love the whooshing noise they make as they go past."
Douglas Adams

Article: 70517
Hello,

I implemented in a CPLD a very simple 8-bit output port. The CPLD
connects to the microcontroller bus, and this part is simply a latch that
is controlled by the address bus and /wr signal. Couldn't be simpler. I
had a 'testpoint' out of the CPLD just to check when this latch is gated.
All works fine.

When I said to myself that I no longer need the testpoint, and removed
the VHDL line, the port stopped working!!! I changed nothing else!!!!
Put back the test point and the port worked again!!! This is crazy!

Anyone has a clue about what is happening? Help is appreciated. I may
leave the testpoint there, but I want to understand what is going on!
(using Max+Plus II 10.2BL and a MAX7128SQC100-15)

The relevant part of the VHDL code follows:

--------------------------------------------
...
-- this is the test point I'm talking about, inspecting the
-- "write_to_the_port" signal
PF(0) <= ga_class_wr;
-- end of debug area

octnr <= conv_integer( unsigned( a(10 downto 8) ) );

ga_class_wr <= '1' WHEN ((a(15 downto 11) = GA_PORTS) AND nwr='0') ELSE '0';
ga_class_rd <= '1' WHEN ((a(15 downto 11) = GA_PORTS) AND nrd='0') ELSE '0';

PA <= par;
PB <= pbr;
PE <= per;
pcr <= PC;
pdr <= PD;

-- latching cpu bus writes
cpuio: PROCESS(ga_class_rd, ga_class_wr, d)
  VARIABLE wrdata: STD_LOGIC_VECTOR (7 downto 0);
BEGIN
  IF (ga_class_wr='1') THEN
    CASE octnr IS
      WHEN 0 => par <= d;
      WHEN 1 => pbr <= d;
      WHEN 4 => per <= d;
      WHEN OTHERS =>
    END CASE;
  END IF;

  IF (ga_class_rd='1') THEN
    CASE octnr IS
      WHEN 2 => wrdata := pcr;
      WHEN 3 => wrdata := pdr;
      WHEN OTHERS =>
    END CASE;
    d <= wrdata;
  ELSE
    d <= "ZZZZZZZZ";
  END IF;
END PROCESS cpuio;

Article: 70518
Marc Kelly <marc@redbeard.demon.co.uk> wrote:

: Commandline tools fly, we place and route on a duel hyperthreded xeon (4
: logical cpus) and setting 4 designs off in parallel gives impressive
: performace, made the time spent building large Makefiles worth while.

I'd appreciate it if you would post a simple command file.

Thanks
--
Uwe Bonnes bon@elektron.ikp.physik.tu-darmstadt.de
Institut fuer Kernphysik Schlossgartenstrasse 9 64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

Article: 70519
"Matt Dykes" <mattdykes@earthlink.net> wrote in message news:8a89c2c3.0406171251.42e240ca@posting.google.com... > XST keeps removing some of my input pins. I have used the LOC > attribute in both the VHDL itself and in the .ucf file to no avail. > After PAR, the pins declared in the port declaration (and LOCed to > specific pins) have not been routed to a pad. > > The inputs ARE used by latching into a register that is later read. > Any thoughts on why XST is removing them and how to make it stop? > Thx-Matt Hi, Have a look at the SAVE NET FLAG constraint in the cgd.pdf documentation. Regards, Alvin.Article: 70520
Peter, here is some user code that lets you access the DOCM (8KB at
address 0x40000000):

---
// mmaptest.c
// gcc -o mmaptest mmaptest.c
// su -c "mmaptest"

#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <assert.h>

int main(void)
{
    int fd;
    unsigned char *buf;
    int i;

    fd = open("/dev/mem", O_RDWR);
    printf("fd: %d\n", fd);

    buf = mmap((void *)0x40000000, 8*1024, PROT_READ | PROT_WRITE,
               MAP_SHARED, fd, 0x40000000);
    printf("buf: %p\n", (void *)buf);
    /* virtual address must match physical address */
    assert(buf == (unsigned char *)0x40000000);

    for (i = 0; i < 8; i++)
        printf("buf[%d] = %x\n", i, buf[i]);

    return 0;
}
---

Please note that the OCM is virtually addressed and virtually accessed,
i.e. you must make sure that the virtual address returned by mmap() is
exactly the same as the physical address. For that, ioremap() will not
work, as the virtual address and the physical address typically are
different.

Further, you have to enable the work-around for the OCM errata, i.e. set
some undocumented bit(s) in the CCR0 register. The PPC405 errata sheet
available from xilinx.com contains information on which bit(s) these are
(see solution record 14052).

Next, the program must be run with enough rights to access /dev/mem,
i.e. run it as root. For a final implementation you will typically have
to write a device driver that accesses the OCM and abstracts these
accesses from the user code.

- Peter

Peter Monta wrote:

> I'm having trouble accessing OCM peripherals from Linux.
> The symptom is a machine check exception whenever the
> OCM address space is read or written.
>
> This code is run from a kernel module:
>
>     unsigned int* p = (unsigned int*)ioremap(0x40000000, 0x1000);
>     printk("%08x\n", p[0]);
>
> When the OCM address of 0x40000000 is replaced with, say,
> the UART at 0xA0000000, things work as expected: can peek
> and poke the UART registers with no problem. How is
> the OCM different from a PLB peripheral from Linux's
> point of view?
>
> The cacheability registers, DCCR and ICCR,
> are correctly set by Linux to 0xf0000000, so only addresses
> under 512 MByte are cacheable (currently occupied by DRAM),
> and OCM is well outside this range. (We use only DSOCM.)
>
> If the ioremap() address is replaced with something
> nonexistent, such as 0x50000000, then the same type of
> machine exception occurs. It's as if the OCM didn't exist.
>
> But running in real mode works fine: a small test application
> running from PLB block RAM can see all of the OCM resources.
>
> This is a 2VP20, processor version number 0x200108a0.
>
> I plan to open a Xilinx hotline case, but thought I'd ask
> in the newsgroup for good measure.
>
> Cheers,
> Peter Monta
> RGB Networks, Inc.

Article: 70521
ngc2edif.exe is for simulation only and is the same as the xst edifngc
parameter. ISE 4 was the last version to include the ability to compile
to EDIF.

The good thing is that you don't need it. Do this: put the ngc file in
the same directory where you're running NGDBuild on your EDIF file.
Include in your EDIF file the cell interface declaration only. (You can
cut this from the file ngc2edif generated.) If you have an EDIF cell
interface declaration that is not a primitive, Xilinx will look for a
file of the same name in a compiled format in the current directory and
automatically merge that in.

"Michael Rhotert" <mrhotert@yahoo.com> wrote in message
news:cau9qq$vpq$05$1@news.t-online.com...
>
> <sanpab@eis.uva.es> wrote
>> Hi all,
>>
>> I have installed ISE Foundation (6.2.03) and tried to generate an
>> EDIF file from a Verilog file. I remember I could do it a few years
>> ago with 4.x version (not sure).
>>
>> I need to install something special?
>>
>> Anyway, what happens if the design has parameters? The EDIF file
>> allow it?
>>
>> Thanks in advance, Santiago.
>
> Hi,
> synthesize your design with xst and then use ngc2edif.exe from command
> line to translate the *.ngc netlist to EDIF format.
>
> Another option is to run the 'translate' (ngdbuild) step after
> synthesis and then use ngd2edif.exe from command line.
>
> /Michael

Article: 70522
In XST use the -iobuf NO parameter to turn off the automatic buffer
insertion.

"Jake Janovetz" <jakespambox@yahoo.com> wrote in message
news:d6ad3144.0406171545.68f3e376@posting.google.com...
> (I'm not sure why, but Google apparently loses about 50% of my posts,
> so I'll try this again)
>
> I have a few modules that I would provide to customers. They are all
> quite simple, but by not providing Verilog/VHDL I shelter them from the
> implementation details and possible warnings that would come from
> synthesis. So, I'd prefer to provide library "objects" in NGC format.
>
> Most of the modules are 'internal' (not requiring IOBs), but one needs
> to map to IO pins, including an 8-bit bidirectional bus. If I -don't-
> include IOBs in the module, the parent design synthesizes OBUFs for
> the bidir bus and completely ignores the inputs. If I manually map
> the OBUFTs within the module, I get complaints during parent synthesis
> because apparently the parent is adding OBUFs which compete with the
> OBUFTs in the module.
>
> I'd prefer a solution which requires as little 'extra' work on the
> parent side of things, but would appreciate any suggestions.
>
> Cheers,
> Jake

Article: 70523
Hello,

I correctly implemented the bus macro using the BUFT. I just have two
questions:

- The number of bus macros in Virtex II per one column is 112, but I
need more than 112 for one bus. Is it possible to expand this bus to
include more than 2 columns? I mean, if I need 200 BUFTs to provide the
communication for 200 signals, can I use 4 columns, using 200 of the 224
BUFTs?

- The second question is: if I understand it correctly, I have to use 2
BUFTs for each signal in the bus macro, one in each module, I mean one to
the right and one to the left. Is this true?

Is there any place that I can register so that I can post messages?

Thanks in advance, I appreciate your help very much.

Article: 70524
Hello all,

I am trying to program the FPGA on a Xilinx Virtex 2 Pro board. The only
hardware I have other than this board is a windriver vision probe2.
Though it is mentioned in the feature list of visionprobe 2 (XE) that it
can do "Flash Programming Code & FPGA Download", I am not able to find
out how to do it.

This vision probe hardware can be connected to the CPU JTAG, CPU DEBUG
and CPU TRACE ports of the board, but I am not sure which port I should
use for programming the FPGA (if it is possible to program). I guess it
should be CPU JTAG, which is part of the JTAG chain (other ports
connected to the JTAG chain are JTAG PORT, PARALLEL IV and the SYSTEM ACE
connector). The JTAG chain is connected to the Xilinx PROMs (which are
connected to the FPGA).

If I can use the CPU JTAG port for FPGA programming, what software should
I be using on my host for this? Has anybody tried this earlier? Do I
need some other hardware/cables?

Thanks
rajendra