Hi experts,

In a new board design I have to build a generic IO port. This generic IO port will be connected to a connector (20-pin header).

The output drive voltage will be LVCMOS18 for some applications and LVTTL for other applications.

My question is: do I need to specify LVCMOS18 and LVTTL in the .ucf file and create two .bit files (one for LVCMOS18 and one for LVTTL)? Or could I create only one .bit file and work with VCCO alone?

Same question for CoolRunner-II.

Thanks
Laurent Gauch
www.amontec.com
Article: 70476

Another question:

Referring to http://www.altera.com/literature/lit-sop.jsp where the PWM is used as an example for a Verilog blackbox, the lowest level of the C headers contains the following:

#ifndef __ALTERA_AVALON_PWM00_H
#define __ALTERA_AVALON_PWM00_H

#define ALTERA_AVALON_PWM_CLOCK_DIVIDER(base_addr) (ALTERA_AVALON_PWM_TYPE (base_addr + 0x0 ))
#define ALTERA_AVALON_PWM_CLOCK_DIVIDER_MSK (0xFFFFFFFF)
#define ALTERA_AVALON_PWM_CLOCK_DIVIDER_OFST (0)

#define ALTERA_AVALON_PWM_DUTY_CYCLE(base_addr) (ALTERA_AVALON_PWM_TYPE (base_addr + 0x4 ))
#define ALTERA_AVALON_PWM_DUTY_CYCLE_MSK (0xFFFFFFFF)
#define ALTERA_AVALON_PWM_DUTY_CYCLE_OFST (0)

#define ALTERA_AVALON_PWM_ENABLE(base_addr) (ALTERA_AVALON_PWM_TYPE (base_addr + 0x8 ))
#define ALTERA_AVALON_PWM_ENABLE_MSK (0x1)
#define ALTERA_AVALON_PWM_ENABLE_OFST (0)

#endif /* __ALTERA_AVALON_PWM00_H */

What exactly does ALTERA_AVALON_PWM_TYPE do? What do ALTERA_AVALON_PWM_ENABLE_MSK and ALTERA_AVALON_PWM_ENABLE_OFST mean? If the former is a subroutine, I can't seem to find it in any of the files.

In the excalibur.h file, there is a line:

#define na_user_logic_altera_avalon_pwm_0 ((void *) 0x00020020) // user_logic_altera_avalon_pwm

Since the base address is always pointing to void, can I say that ALTERA_AVALON_PWM_CLOCK_DIVIDER always has the input 0? What exactly is np_usersocket and why is it set to point to void?
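These names follow the usual generated-header pattern for Avalon peripherals, but the quoted files do not show what ALTERA_AVALON_PWM_TYPE expands to. The sketch below only illustrates how such base/MSK/OFST macros are conventionally combined; the guessed expansion of ALTERA_AVALON_PWM_TYPE, the PWM_BASE define, the register values, and the function name are all assumptions made for this example, not something taken from the Altera headers.

/* Illustration only. The expansion of ALTERA_AVALON_PWM_TYPE below is a guess
   at the usual volatile-register cast; it is not confirmed by the quoted files. */

#include <stdint.h>

#define PWM_BASE ((uintptr_t) 0x00020020)   /* na_user_logic_altera_avalon_pwm_0 */

/* Guessed definition of the macro the generated header relies on: */
#define ALTERA_AVALON_PWM_TYPE *(volatile uint32_t *)

static void pwm_setup_example(void)
{
    /* CLOCK_DIVIDER sits at base + 0x0, DUTY_CYCLE at base + 0x4, ENABLE at base + 0x8. */
    ALTERA_AVALON_PWM_TYPE (PWM_BASE + 0x0) = 1000;   /* clock divider (example value) */
    ALTERA_AVALON_PWM_TYPE (PWM_BASE + 0x4) = 250;    /* duty cycle   (example value) */

    /* _MSK is the valid-bit mask of a field and _OFST its bit offset, so the
       enable bit (mask 0x1, offset 0) is set with a read-modify-write: */
    uint32_t reg = ALTERA_AVALON_PWM_TYPE (PWM_BASE + 0x8);
    reg = (reg & ~0x1u) | ((1u << 0) & 0x1u);
    ALTERA_AVALON_PWM_TYPE (PWM_BASE + 0x8) = reg;
}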
Article: 70478

salman sheikh wrote:
> Hello,
>
> I just installed Xilinx ISE 6.2i on a Linux box and it is sluggish as anything. Does anyone know why? I am running on a P4 1.7GHz w/ 1GB of RAM. On windows, it is much more zippy. Could it be the gui toolkit that Xilinx is using (it seems like JAVA.......slow as a slug....)?

Oddly enough, running the Windows version of ISE under Wine/Linux is significantly more responsive than the Linux "native" version... sigh.

--
My real email is akamail.com@dclark (or something like that).
Article: 70479

vbishtei@hotmail.com (vadim) wrote in message news:<2a613f5d.0406170517.611ae93c@posting.google.com>...
> Does anybody know how can I disable the automatic optimizer in Quartus II to prevent it from eliminating redundant gates ?
>
> (I am trying to implement a delay line using a cascade of inverters, which Quartus removes during compilation since they are logically redundant.)
>
> thanks,

Hi Vadim,

Building delay chains is not recommended. You can use LCELL buffer as shown below to force a route through two lcells.

module test (in, out);
  input  in;
  output out;
  wire   lcaout;

  lcell lca (in, lcaout);
  lcell lcb (lcaout, out);
endmodule

Hope this helps.

Subroto Datta
Altera Corp.
Article: 70480

Hi Laurent,

Regarding CoolRunner-II, make sure that you configure the device for LVTTL or LVCMOS18 accordingly.

You can either use the .ucf file to specify the constraint or use the GUI to do it.

The LVTTL and LVCMOS18 settings affect the way that the CoolRunner-II IOs are configured. No real damage will occur if you set LVCMOS18 and tie Vccio to 3.3V (or vice versa). However, your rise/fall times and drive strength will get affected if you have the wrong settings...

Hope that helps!
Mark

Amontec, Laurent Gauch wrote:
> Hi experts,
>
> In a new board design I have to build a generic IO PORT. This generic IO port will be connected to an connector (20-pin header).
>
> The output drive voltage will be LVCMOS18 for some applications and LVTTL for some others applications.
>
> My question is :
> Did I need to specifiy LVCMOS18 and LVTTL in the .ucf file and create two .bit files (one for LVCMOS18 and one for LVTTL) ?
> OR could I only create one .bit and work with the VCCO only ?
>
> Same questions for Coolrunner-II
>
> Thanks
> Laurent Gauch
> www.amontec.com
Article: 70481

Hi Mark,

Many thanks for your comments. Is it the same for Spartan-IIE?

My goal is to provide a new version of my Chameleon POD, but using USB 2.0 and a CoolRunner or Spartan-IIE. The first objective of this new USB 2.0-based Chameleon will be to build a JTAG interface accelerator. JTAG can be 3.3V or 2.5V (and will be 1.8V in the future), so I wonder whether it makes sense to do the voltage level conversion directly with VCCO. VCCO would be driven by the user's target board. What do you think, is that acceptable?

I will do some tests next week on rise/fall times and drive strength when configuring the FPGA for LVTTL and driving VCCO down to 1.8V.

Best regards,
Laurent

Mark Ng wrote:
> Hi Laurent,
>
> Regarding CoolRunner-II, make sure that you configure the device for LVTTL or LVCMOS18 accordingly.
>
> You can either use the .ucf file to specify the constraint or use the GUI to do it.
>
> The LVTTL and LVCMOS18 settings affect the way that the CoolRunner-II IOs are configured. No real damage will occur if you set LVCMOS18 and tie Vccio to 3.3V (or vice versa). However, your rise/fall times and drive strength will get affected if you have the wrong settings...
>
> Hope that helps!
> Mark
>
> Amontec, Laurent Gauch wrote:
>> Hi experts,
>>
>> In a new board design I have to build a generic IO PORT. This generic IO port will be connected to an connector (20-pin header).
>>
>> The output drive voltage will be LVCMOS18 for some applications and LVTTL for some others applications.
>>
>> My question is :
>> Did I need to specifiy LVCMOS18 and LVTTL in the .ucf file and create two .bit files (one for LVCMOS18 and one for LVTTL) ?
>> OR could I only create one .bit and work with the VCCO only ?
>>
>> Same questions for Coolrunner-II
>>
>> Thanks
>> Laurent Gauch
>> www.amontec.com
Article: 70482

salman sheikh wrote:
> Hello,
>
> I just installed Xilinx ISE 6.2i on a Linux box and it is sluggish as anything. Does anyone know why? I am running on a P4 1.7GHz w/ 1GB of RAM. On windows, it is much more zippy. Could it be the gui toolkit that Xilinx is using (it seems like JAVA.......slow as a slug....)?

I've been using the Windows version on Windows 2000, running under VMWare emulation, on a Mandrake Linux OS. Some people tell me it is slower than running just a native Win OS and the application, but I don't seem to notice the difference. (Just one small data point.)

Jon
Article: 70483

XST keeps removing some of my input pins. I have used the LOC attribute in both the VHDL itself and in the .ucf file, to no avail. After PAR, the pins declared in the port declaration (and LOCed to specific pins) have not been routed to a pad. The inputs ARE used, by latching them into a register that is later read.

Any thoughts on why XST is removing them and how to make it stop?

Thx - Matt
Article: 70484

Duane Clark wrote:
> salman sheikh wrote:
>> I just installed Xilinx ISE 6.2i on a Linux box and it is sluggish as anything. Does anyone know why? I am running on a P4 1.7GHz w/ 1GB of RAM. On windows, it is much more zippy. Could it be the gui toolkit that Xilinx is using (it seems like JAVA.......slow as a slug....)?
> Oddly enough, running the Windows version of ISE under Wine/Linux is significantly more responsive than the Linux "native" version... sigh.

I've noticed that things run faster under Wine also; the native versions seem to have some horrid Windows-esque GUI toolkit that spends rather a lot of time doing DNS lookups for every window/widget it needs to draw. I notice that the Java-based Coregen is much more responsive than the rest of the system.

Command-line tools fly. We place and route on a dual hyperthreaded Xeon (4 logical CPUs), and setting 4 designs off in parallel gives impressive performance; it made the time spent building large Makefiles worthwhile.
Article: 70485

Forgive me if this has been asked before, but does anybody have comments or links to simple methods of compressing/decompressing Xilinx configuration bitstreams? I've been perusing a few of my .rbt files, and they have long bunches of 1s and 0s (interestingly, some designs seem to have mostly 1s, others mostly 0s). I'd think that something very simple might achieve pretty serious (as in, maybe 2:1-ish) compression without a lot of runtime complexity.

We generally run a uP from EPROM, with the uP code and the packed Xilinx config stuff in the same EPROM, and the uP bit-banging the Xilinx FPGA at powerup time. So a simple decompressor would be nice.

I did google for this... haven't found much.

Thanks,

John
Article: 70486

Hello Vadim,

In VHDL, enter the following:

signal delay_gate : std_logic;

attribute keep : boolean;
attribute keep of lval_rowdel_delaln : signal is true;

Similar construct exists for Verilog. This turns delay_gate into an LCELL of whatever is feeding it.

Quartus doesn't remove LCELLs, unless Remove Redundant Logic Cells is set in your design, which by default it is off - check your .qsf or under Assignments/Settings/Analysis&Synth/More Settings. If you want to keep a redundant register, set REMOVE_DUPLICATE_REGISTERS to OFF for that register.

HTH,
-- Pete

vbishtei@hotmail.com (vadim) wrote in message news:<2a613f5d.0406170517.611ae93c@posting.google.com>...
> Does anybody know how can I disable the automatic optimizer in Quartus II to prevent it from eliminating redundant gates ?
>
> (I am trying to implement a delay line using a cascade of inverters, which Quartus removes during compilation since they are logically redundant.)
>
> thanks,
Article: 70487

Oops, cut and paste error ... my other msg should read:

attribute keep of delay_gate : signal is true;

-- Pete

vbishtei@hotmail.com (vadim) wrote in message news:<2a613f5d.0406170517.611ae93c@posting.google.com>...
> Does anybody know how can I disable the automatic optimizer in Quartus II to prevent it from eliminating redundant gates ?
>
> (I am trying to implement a delay line using a cascade of inverters, which Quartus removes during compilation since they are logically redundant.)
>
> thanks,
Article: 70488

John Larkin wrote:
> Forgive me if this has been asked before, but does anybody have comments or links to simple methods of compressing/decompressing Xilinx configuration bitstreams? I've been perusing a few of my .rbt files, and they have long bunches of 1s and 0s (interestingly, different designs seem to have more 1s, others mostly 0s.) I'd think that something very simple might achieve pretty serious (as, maybe 2:1-ish) compression without a lot of runtime complexity. We generally run a uP from EPROM, with the uP code and the packed Xilinx config stuff in the same eprom, with the uP bit-banging the Xilinx FPGA at powerup time. So a simple decompressor would be nice.
>
> I did google for this... haven't found much.

There was discussion on this some months ago. Might show up here?
http://www.fpga-faq.com/

You can run std ZIP tools on the files, to get a quick 'practical limit' indication.

We did some work with run-length compression, which is very simple (simple enough to code into a CPLD), but has medium compression gains. ISTR about half the gains of ZIP? It could be improved with an RLC compiler/optimiser that looked for the best pattern/lengths for that chip, or even bitstream, as you could store the RLC params as a 'header', but we did not go that far.

Be a good project for Xilinx to do as an app note :) It would make sense to target the pattern-repeat sizes on devices like FPGAs.

-jg
Article: 70489

The bit generation tool has an option to compress the .bit file. I use this when I'm loading over JTAG to save time. I assume Xilinx has info on in-system programming with a compressed .bit file.

However, I've observed the same phenomenon as you: when I zip a .bit file it is usually less than 50% of the original size. My guess is even a trivial run length encoding compression would be helpful.

There are plenty of resources for Lempel-Ziv compression on the web: see http://www.dogma.net/markn/articles/lzw/lzw.htm

If you get it working please post/send the result.

"John Larkin" <jjlarkin@highSNIPlandTHIStechPLEASEnology.com> wrote in message news:hh34d0tud78se5vqejirkc2bvufbd8io3d@4ax.com...
> Forgive me if this has been asked before, but does anybody have comments or links to simple methods of compressing/decompressing Xilinx configuration bitstreams? I've been perusing a few of my .rbt files, and they have long bunches of 1s and 0s (interestingly, different designs seem to have more 1s, others mostly 0s.) I'd think that something very simple might achieve pretty serious (as, maybe 2:1-ish) compression without a lot of runtime complexity. We generally run a uP from EPROM, with the uP code and the packed Xilinx config stuff in the same eprom, with the uP bit-banging the Xilinx FPGA at powerup time. So a simple decompressor would be nice.
>
> I did google for this... haven't found much.
>
> Thanks,
>
> John
Article: 70490

John,

I think that I had heard that zipping, and unzipping bit files led to the most compression (2:1 or better). (classic unix or windows zip/unzip)

I think that a zip/unzip routine would be a great example of something a uP could do without an unreasonable amount of memory (ROM+RAM) support.

Austin

John Larkin wrote:
> Forgive me if this has been asked before, but does anybody have comments or links to simple methods of compressing/decompressing Xilinx configuration bitstreams? I've been perusing a few of my .rbt files, and they have long bunches of 1s and 0s (interestingly, different designs seem to have more 1s, others mostly 0s.) I'd think that something very simple might achieve pretty serious (as, maybe 2:1-ish) compression without a lot of runtime complexity. We generally run a uP from EPROM, with the uP code and the packed Xilinx config stuff in the same eprom, with the uP bit-banging the Xilinx FPGA at powerup time. So a simple decompressor would be nice.
>
> I did google for this... haven't found much.
>
> Thanks,
>
> John
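If you go the generic route Austin describes, the decompress side can be as small as a single call into zlib. The sketch below assumes the bitstream was shrunk offline with zlib's compress() (a zlib/deflate stream, not a .zip container); the buffer name, the function name and the 64 KB size are placeholders, not real design numbers.

/* Minimal sketch: inflate a zlib-compressed bitstream image before bit-banging it out. */

#include <stdio.h>
#include <zlib.h>

#define MAX_BITSTREAM_BYTES (64u * 1024u)     /* placeholder upper bound */

static unsigned char bitstream[MAX_BITSTREAM_BYTES];

int load_fpga_image(const unsigned char *packed, unsigned long packed_len)
{
    uLongf out_len = sizeof bitstream;

    int rc = uncompress(bitstream, &out_len, packed, packed_len);
    if (rc != Z_OK)
        return -1;                            /* corrupt or truncated image */

    /* ... bit-bang 'out_len' bytes of 'bitstream' to the FPGA here ... */
    printf("decompressed %lu bytes\n", (unsigned long) out_len);
    return 0;
}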
Article: 70491

First, please be aware that the ASCII .rbt file is 8x the simple .bin file size. Check the bitgen options and you'll find the ability to generate the straight binary file - 1s and 0s at the bit level, not the ASCII character level. Compression beyond that may be what you're looking for, but please - start with the binary file.

"John Larkin" <jjlarkin@highSNIPlandTHIStechPLEASEnology.com> wrote in message news:hh34d0tud78se5vqejirkc2bvufbd8io3d@4ax.com...
> Forgive me if this has been asked before, but does anybody have comments or links to simple methods of compressing/decompressing Xilinx configuration bitstreams? I've been perusing a few of my .rbt files, and they have long bunches of 1s and 0s (interestingly, different designs seem to have more 1s, others mostly 0s.) I'd think that something very simple might achieve pretty serious (as, maybe 2:1-ish) compression without a lot of runtime complexity. We generally run a uP from EPROM, with the uP code and the packed Xilinx config stuff in the same eprom, with the uP bit-banging the Xilinx FPGA at powerup time. So a simple decompressor would be nice.
>
> I did google for this... haven't found much.
>
> Thanks,
>
> John
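For anyone starting from the .rbt instead of the .bin, a rough packer might look like the sketch below. It assumes the body of the file is lines made up only of '0'/'1' characters, skips anything else as header text, and packs bits MSB first - all of which are assumptions about the format, not a specification of it.

/* Rough sketch: pack an ASCII .rbt into raw bytes. Lines containing anything
   other than '0'/'1' are treated as header text and skipped (a heuristic). */

#include <stdio.h>

static int is_bit_line(const char *s)
{
    if (*s == '\0' || *s == '\n' || *s == '\r') return 0;
    for (; *s && *s != '\n' && *s != '\r'; s++)
        if (*s != '0' && *s != '1') return 0;
    return 1;
}

int rbt_to_bin(const char *rbt_path, const char *bin_path)
{
    FILE *in  = fopen(rbt_path, "r");
    FILE *out = fopen(bin_path, "wb");
    char line[1024];
    unsigned byte = 0, nbits = 0;

    if (!in || !out) {
        if (in)  fclose(in);
        if (out) fclose(out);
        return -1;
    }

    while (fgets(line, sizeof line, in)) {
        if (!is_bit_line(line)) continue;              /* skip header lines */
        for (const char *p = line; *p == '0' || *p == '1'; p++) {
            byte = (byte << 1) | (unsigned)(*p - '0'); /* MSB-first packing */
            if (++nbits == 8) {
                fputc((int) byte, out);
                byte = nbits = 0;
            }
        }
    }

    fclose(in);
    fclose(out);
    return 0;
}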
Article: 70492

Austin Lesea wrote:
> John,
>
> I think that I had heard that zipping, and unzipping bit files led to the most compression (2:1 or better). (classic unix or windows zip/unzip)

Yes, but a compressor targeted at FPGA content should be more efficient, and use less resource, than a generic compressor.

> I think that a zip/unzip routine would be a great example of something a uP could do without an unreasonable amount of memory (ROM+RAM) support.

One engineer's reasonable is another's excessive :)

There are two main classes of uC loader:

* Ones that store the compressed stream on-chip, and so can expect to have good random access for things like decompress tables. Large-code uCs also tend to have larger RAM.

* Ones that store the compressed stream in low-cost serial flash. In this class, table handling is not as easy. uCs used here could be as minuscule as the PIC10F in SOT23. The PIC10F starts at 16 bytes of RAM and 256 words of code space...

CPLDs are also used for loaders, and they can do simple decompression.

-jg
"John Larkin" <jjlarkin@highSNIPlandTHIStechPLEASEnology.com> wrote in message news:hh34d0tud78se5vqejirkc2bvufbd8io3d@4ax.com... > > Forgive me if this has been asked before, but does anybody have > comments or links to simple methods of compressing/decompressing > Xilinx configuration bitstreams? I've been perusing a few of my .rbt > files, and they have long bunches of 1s and 0s (interestingly, > different designs seem to have more 1s, others mostly 0s.) I'd think > that something very simple might achieve pretty serious (as, maybe > 2:1-ish) compression without a lot of runtime complexity. We generally > run a uP from EPROM, with the uP code and the packed Xilinx config > stuff in the same eprom, with the uP bit-banging the Xilinx FPGA at > powerup time. So a simple decompressor would be nice. > > I did google for this... haven't found much. > > Thanks, > > John VCC did a package called HOTMan that does compression. It takes the bit file and turns it into a compressed file that looks like... int testArray[2669]=\ { 0xddedda78,0xe55c8c5f,0xefe1c079.... } We get at least 4 to 1 and small designs in big chip can get 50 to 1. The above format allows you to compile the design into a C/C++ program. SteveArticle: 70494
Article: 70494

John Larkin wrote:
> Forgive me if this has been asked before, but does anybody have comments or links to simple methods of compressing/decompressing Xilinx configuration bitstreams? I've been perusing a few of my .rbt files, and they have long bunches of 1s and 0s (interestingly, different designs seem to have more 1s, others mostly 0s.) I'd think that something very simple might achieve pretty serious (as, maybe 2:1-ish) compression without a lot of runtime complexity. We generally run a uP from EPROM, with the uP code and the packed Xilinx config stuff in the same eprom, with the uP bit-banging the Xilinx FPGA at powerup time. So a simple decompressor would be nice.
>
> I did google for this... haven't found much.
>
> Thanks,
>
> John

No links, but have you considered simple run-length encoding? I can think of at least one scheme that would be guaranteed sub-optimal from a compression standpoint but that wouldn't take much code -- just encode any string of 0xff or 0x00 bytes as that byte followed by a count -- so that 0x00 0x00 0x00 0x00 becomes 0x00 0x04, for instance. You have the overhead that 0x00 becomes 0x00 0x01, and you also can't encode anything that spans bytes -- but you may be happy with it none the less.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
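A minimal sketch of that byte-oriented scheme, for what it's worth - the function and parameter names are invented for the example, only runs of 0x00 and 0xFF are collapsed, and runs longer than 255 bytes are simply split:

/* Sketch of the byte-run scheme described above: any run of 0x00 or 0xFF
   bytes becomes <byte><count>; everything else is copied through unchanged. */

#include <stddef.h>

size_t rle_ff00_encode(const unsigned char *in, size_t in_len,
                       unsigned char *out, size_t out_max)
{
    size_t i = 0, o = 0;

    while (i < in_len) {
        unsigned char b = in[i];
        if (b == 0x00 || b == 0xFF) {
            size_t run = 1;
            while (i + run < in_len && in[i + run] == b && run < 255)
                run++;                       /* runs longer than 255 get split */
            if (o + 2 > out_max) return 0;   /* output buffer too small */
            out[o++] = b;
            out[o++] = (unsigned char) run;
            i += run;
        } else {
            if (o + 1 > out_max) return 0;
            out[o++] = b;                    /* literal byte, copied as-is */
            i++;
        }
    }
    return o;                                /* compressed length */
}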
Article: 70495

On Thu, 17 Jun 2004 22:14:15 GMT, "John_H" <johnhandwork@mail.com> wrote:
> First, please be aware that the ASCII .rbt file is 8x the simple .bin file size. Check the bitgen options and you'll find the ability to generate the straight binary file - 1s and 0s at the bit level, not the ASCII character level. Compression beyond that may be what you're looking for, but please - start with the binary file.

Of course. We have a little utility, vaguely like a linker, that gobbles up Motorola .s28 files and Xilinx .rbt files and builds a rom image, all properly squashed into bits. It's cute... it even saves the beginning of the .rbt ASCII header in the rom image for FPGA version verification.

My observation was that the bits themselves include long runs of 1s or 0s. I'd like to design a board using a 28-pin eprom (space is at a premium here) but plan hooks for using a bigger Xilinx chip some day, and then I'd run out of rom space to store the config bits. So having a compression scheme would give us the margin to use the small eprom.

Suppose the compressed data were an array of bytes. If the MS bit of a byte were 0, the remaining 7 bits are to be loaded verbatim; if the MS bit is 1, the other 7 bits specify a run of up to 63 1's or 0's. Something like that; the exact numbers may need tuning. Very easy to unpack, not hard to encode. I'd have to test some actual config files to see how well something like this could compress.

John
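One possible reading of that byte format, as the decompressor the uP would run while bit-banging: here the lower 7 bits of a run byte are split into a 1-bit fill value and a 6-bit count, which is an assumption about the encoding, and send_config_bit() stands in for whatever routine wiggles the FPGA's DIN/CCLK pins.

/* Sketch of a decompressor for the byte format proposed above.
   Assumed layout (the post leaves the exact split open):
     bit 7 = 0 : bits 6..0 are seven literal config bits, sent MSB first
     bit 7 = 1 : bit 6 is the fill value, bits 5..0 are a run length (1..63)
   send_config_bit() is a placeholder for the real DIN/CCLK bit-bang routine. */

extern void send_config_bit(unsigned bit);   /* platform-specific, assumed */

void unpack_bitstream(const unsigned char *packed, unsigned long nbytes)
{
    while (nbytes--) {
        unsigned char b = *packed++;

        if (b & 0x80) {                      /* run of identical bits */
            unsigned bit = (b >> 6) & 1u;
            unsigned run = b & 0x3Fu;
            while (run--)
                send_config_bit(bit);
        } else {                             /* seven literal bits */
            for (int i = 6; i >= 0; i--)
                send_config_bit((b >> i) & 1u);
        }
    }
}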
Article: 70496

On Thu, 17 Jun 2004 14:41:01 -0700, John Larkin <jjlarkin@highSNIPlandTHIStechPLEASEnology.com> wrote:
> Forgive me if this has been asked before, but does anybody have comments or links to simple methods of compressing/decompressing Xilinx configuration bitstreams? I've been perusing a few of my .rbt files, and they have long bunches of 1s and 0s (interestingly, different designs seem to have more 1s, others mostly 0s.) I'd think that something very simple might achieve pretty serious (as, maybe 2:1-ish) compression without a lot of runtime complexity. We generally run a uP from EPROM, with the uP code and the packed Xilinx config stuff in the same eprom, with the uP bit-banging the Xilinx FPGA at powerup time. So a simple decompressor would be nice.
>
> I did google for this... haven't found much.
>
> Thanks,
>
> John

See:
www.ee.washington.edu/people/faculty/hauck/publications/runlength.PDF
www.ee.washington.edu/people/faculty/hauck/publications/runlengthTR.PDF
www.ee.washington.edu/people/faculty/hauck/publications/runlengthJ.pdf

It should be straightforward to generate some RLL compression and decompression code. You might want to test the algorithms on a PC to make sure that the decompressed output ends up the same as the uncompressed input. A garbled bitstream can have the same effect as the MC6800 HCF opcode...

================================
Greg Neff
VP Engineering
*Microsym* Computers Inc.
greg@guesswhichwordgoeshere.com
"Allan Herriman" <allan.herriman.hates.spam@ctam.com.au.invalid> wrote in message news:drb3d09tn390n8g3m7a0fdebqqapvc20r9@4ax.com... [snip] > Some bugs in kcpsm2.exe are also present in kcpsm3.exe (see webcases > #533179 and #533195). It appears that web case #533195 relates to the DOS 8.3 filename requirements for the KCPSM3 assembler. This is an unfortunate side effect of maintaining compatibility with the widest number of development computers and with the original choice of programming languages to create the KCPSM3 assember. One alternative is to use the more modern Mediatronix pBlazIDE graphical development environment, which is also no charge. The instruction nmemonics are slightly different than the KCPSM3 assembler but the pBlazIDE software has a code import function that reads KCPSM3 code. Furthermore, the pBlazIDE software includes an instruction-set simulator. http://www.mediatronix.com/pBlazeIDE.htm > Kcpsm3.exe can't compile code that compiles with kcpsm2.exe. (This > violates one of the cardinal rules of EDA - don't break existing > designs.) > The problem seems to be that kcpsm3 doesn't accept register names of > the form 's00', which is the only form that kcpsm2 accepts. There are some differences between the PicoBlaze controller for the Spartan-3, Virtex-II, and Virtex-II Pro FPGA families and the older version available for just Virtex-II and Virtex-II Pro. The latest incarnation includes two new instructions (COMPARE, TEST) plus a 64-byte scratchpad RAM, although the number of registers dropped back to 16 from 32. You can use the NAMEREG assembler directive to handle the different register assignment. Here's an example. Just add the following code to the start of your NAMEREG s0, s00 ; alias old register name s00 to s0 NAMEREG s1, s01 ; alias old register name s01 to s1 ... NAMEREG sF, s0F ; alias old register name s0F to sF Then the remainder of your code should compile using the old names. However, you still need to adjust the register assignments for the upper 16 registers. --------------------------------- Steven K. Knapp Applications Manager, Xilinx Inc. General Products Division Spartan-3/II/IIE FPGAs http://www.xilinx.com/spartan3 --------------------------------- Spartan-3: Make it Your ASICArticle: 70498
Article: 70498

(I'm not sure why, but Google apparently loses about 50% of my posts, so I'll try this again.)

I have a few modules that I would provide to customers. They are all quite simple, but by not providing Verilog/VHDL I shelter them from the implementation details and possible warnings that would come from synthesis. So, I'd prefer to provide library "objects" in NGC format.

Most of the modules are 'internal' (not requiring IOBs), but one needs to map to IO pins, including an 8-bit bidirectional bus. If I -don't- include IOBs in the module, the parent design synthesizes OBUFs for the bidir bus and completely ignores the inputs. If I manually map the OBUFTs within the module, I get complaints during parent synthesis because apparently the parent is adding OBUFs which compete with the OBUFTs in the module.

I'd prefer a solution which requires as little 'extra' work on the parent side of things as possible, but would appreciate any suggestions.

Cheers,
Jake
Article: 70499

Mike,

I started off using your code, it looks more like what I am used to using. I am not clear on the feeding back of the output to the input. Is this done internal to the PAL? I fixed an upper/lower case problem: the enable line was uppercase and pins 6, 7, 8 were lower. Now I am getting a 0014CB error, "The variable used as an input was previously assigned to an output that is neither bidirectional nor feeds back into the input array."

My device is g16v8a. When I do a device-independent compile it seems happier, I guess because it doesn't know the pins (input vs output) of my device-independent device.

Here is the code I am compiling now.........

pin[2..5] = [D0..3];
pin[6,7,8] = [!ce,!oe,!wr];
pin[12..15] = [Q0..3];

field din=[D0..3];
field dout=[Q0..3];

enable = !ce & oe & !wr;

dout = !enable & dout /* hold latch contents by feeding output back to input */
     # enable & din;

WHAT I THINK I UNDERSTAND.........

So, we are first saying pins 2, 3, 4, 5 will be known as D0, D1, D2, & D3. Next we say pins 12, 13, 14, & 15 will be outputs Q0, Q1, Q2, & Q3. Next, pin 6 is chip enable active low, pin 7 is output enable active low, and pin 8 is assigned write-not (active low). We say DIN is the data inputs D0-D3 and DOUT is Q0-Q3. When we are active, CE is low, OE is high, and WR is low.

NOW FOR THE FUN PART........

DOUT is equal to: not-enabled ANDed with DOUT, ORed with enabled ANDed with the data in? NOT SURE I GET THAT ONE.......

Thanks all again for your support,
Paul K.

accrg@accrepairs.com (Paul K) wrote in message news:<a5ad9202.0406151436.22f4fdfb@posting.google.com>...
> Thanks Mike and Jim for the replies, I will go off and digest this and see what happens. This is also the first time I have used the groups, thank you very much for the replies.
>
> Paul
>
>
> Mike Harrison <mike@whitewing.co.uk> wrote in message news:<t7ftc0l7fj5fqu89qpgpri2btgfntnciis@4ax.com>...
> > On 14 Jun 2004 18:38:54 -0700, accrg@accrepairs.com (Paul K) wrote:
> >
> > >I am very new to PAL programming. I have created a few to decode addresses. I have been using the ATMEL 16V8 PAL and WINCUPL.
> > >
> > >I now need to latch data appearing on 3 inputs when a certain condition is met on 3 other inputs.
> > >
> > >I need to latch the data on a cpu data buss D0, D1, & D2 when the signal write (WR\) is low, and the signal chip enable (CE\) is low and the signal output enable (OE\) is high, then latch the data on D0-D3.
> > >
> > >I currently have the circuit working with a 74LS02 (NOR) with the inputs tied to WR\ and OE\; the output of the NOR goes to a 74LS08 (AND), the other input is tied to OE\. The output of the AND gate feeds a 74LS273 latch. I will be latching on the falling edge of WR\
> > >
> > >I don't have room for the 3 TTL chips so I am trying to move it to the 16V8.
> > >
> > >I am not sure how to approach the latch, any help or push in the right direction would be greatly appreciated. I have had a hard time finding examples using the latching feature.
> > >
> > >Thanks, Paul
> >
> > My CUPL is a little rusty so excuse any punctuation errors...
> >
> > pin[2..5] = [d0..3];
> > pin[12..15] = [q0..3];
> > pin[6,7,8] = [!ce,!oe,!wr];
> >
> > field din=[d0..3];
> > field dout=[q0..3];
> >
> > enable = !CE & OE & !WR;
> >
> > dout = !enable & dout /* hold latch contents by feeding output back to input */
> >      # enable & din;
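For what it's worth, that last equation is just a transparent latch written as combinational feedback: while enable is true the outputs follow din, and when enable goes false the outputs hold their previous value, because dout is fed back into its own equation. A tiny behavioural model in C (the function name and the test values are made up for this illustration, and it says nothing about how the 16V8 actually implements the feedback path):

/* Behavioural model of:  dout = !enable & dout  #  enable & din;
   Purely to illustrate the hold/follow action of the combinational feedback. */

#include <stdio.h>

static unsigned latch_update(unsigned enable, unsigned din, unsigned dout_prev)
{
    unsigned en = enable ? 0xFu : 0x0u;              /* broadcast the 1-bit enable */
    /* !enable & dout (hold the old value)  OR  enable & din (follow the input) */
    return ((~en & dout_prev) | (en & din)) & 0xFu;  /* 4 bits wide, like Q0..Q3 */
}

int main(void)
{
    unsigned q = 0x0;
    q = latch_update(1, 0xA, q);   /* enabled: q follows din  -> 0xA */
    printf("after load : %X\n", q);
    q = latch_update(0, 0x5, q);   /* disabled: q holds 0xA even though din = 0x5 */
    printf("after hold : %X\n", q);
    return 0;
}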