Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Unfortunately, you're going to have to do some work. Either you have to initialize the contents of a RAM (BRAM or LUT RAM) with your data, which can be done through CoreGen after you massage the txt file into an acceptable format, or you can create a microprocessor-based system using the EDK and load your txt file into a program or into the Xilinx filesystem. I am curious about your application, though. How are you using the text file?

Article: 90451
Slurp wrote:
> "Enver" <ecavus@gmail.com> wrote in message
> news:1129061312.313084.204610@g47g2000cwa.googlegroups.com...
>
>> Hi,
>>
>> I need to implement an 8x8 circular shifter module where I have 8 input
>> ports (each 4 bits) that need to be connected to 8 output ports (each
>> 4 bits), shifted by the shift amount. I have an input port that tells the
>> shift value.
>>
>> e.g.:
>> switch_block: for k in 0 to 7 generate
>>   s <= (k+shift) when (k+shift) < 7 else (k+shift)-7;
>>   switch_out(k+shift) <= switch_in(k);
>> end generate switch_block;
>>
>> Is there anyone who can tell me what is the best way of implementing
>> this in an FPGA? The obvious way is a shift register, but in my design I
>> need to perform this operation in one clk cycle. I will appreciate any
>> suggestions, thoughts, or comments.
>>
>> Thanks,
>
> Use 8 x (8-channel 4-bit muxes) and drive the 3-bit addresses via a set of
> adders, each offset by one, ignoring overflows.
>
> By inputting the number of shifts to the front-end add, the correct shifts
> are applied to the remaining.
>
> Assuming your clock rate is <200MHz you should piss it in 1 clock cycle.
>
> Slurp

You could use muxes or multipliers as suggested; however, a lower-cost solution is a set of layered 2:1 muxes. You'd have 4 instances, one for each bit, so let's just focus on one instance. The one-bit instance consists of 3 layers of 2:1 muxes. The first layer either passes the data in the same order or rotates it by 4 positions, and is controlled by the msb of your shift control. The second layer either passes the output of the first layer unchanged or rotates it by 2 positions. The final layer passes the second-layer output unchanged or rotates it by 1 position. The result takes 24 2:1 muxes per bit, arranged in 3 layers of logic. If you can afford the pipeline latency, you can register each 2:1 mux for a 3-clock latency, and this will run at close to the max toggle rate of the FPGA.
If you did it with 8:1 muxes, which each consist of 7 2:1 muxes, you'd need 8 8:1 muxes, or a total of 56 2:1 muxes per bit. Using the dedicated F5 and F6 muxes in Virtex would reduce the size of the 8:1 mux, but at the cost of increased routing complexity and higher fan-out loading on the circuit driving the shifter.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759

Article: 90452
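Ray's layered-rotator scheme is easy to sanity-check in software before committing it to HDL. A minimal Python sketch (function and variable names are mine, not from the thread) models the three mux layers - rotate-by-4, rotate-by-2, rotate-by-1, each enabled by one bit of the shift control - and confirms the result equals a plain circular shift:

```python
def layered_rotate(vals, shift):
    """Model of the 3-layer 2:1-mux rotator for an 8-element input.

    Each layer either passes its input unchanged or rotates it,
    selected by one bit of the 3-bit shift control (MSB first).
    """
    out = list(vals)
    for bit, amount in ((2, 4), (1, 2), (0, 1)):
        if (shift >> bit) & 1:
            out = out[amount:] + out[:amount]  # rotate left by 'amount'
    return out

# The layered result matches a direct circular shift for all 8 shift values.
for s in range(8):
    assert layered_rotate(range(8), s) == list(range(s, 8)) + list(range(s))
```

Whether "shift" means a left or right rotation of your ports is a convention you'd fix in the HDL; the layering trick works either way, since rotations compose additively.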
Ray Andraka wrote:
> Time has come for a computer upgrade (I'm currently using a dual
> Athlon 1.8GHz with 4GB memory and a 15000RPM SCSI RAID array running
> Win2K, Matrox dual-head video with a pair of 19" monitors), but I
> haven't kept up at all with the computer market. I'm wondering what
> people are using these days for high-end designs (for simulation and
> PAR especially). I'd like something that doesn't sound like a vacuum
> cleaner and heat the room like a space heater this time around too;
> perhaps I need to consider liquid cooling? Any comments would be
> appreciated. Yes, I admit I am being lazy.

I finally got around to specifying the system. I settled on a system put together by <A HREF="http://secure.hypersonic-pc.com">Hypersonic PC</A>: a Cyclone system with an Athlon 64 X2 4800+, 2GB RAM, RAID 3 with 250 GB SATA drives. I substituted an Nvidia Quadro FX540 for the video (I didn't need the high-end gaming video). It has liquid cooling for the processor and acoustic insulation, so hopefully I won't suffer any more hearing loss. Should have it in my hands by the end of the month.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759

Article: 90453
Well, veterans of the TTL days (and I am one of them) can exchange horror stories about weird metastable behavior, but the modern CMOS flip-flops that I have tested repeatedly during the past 17 years behave much better. See XAPP094.

The problem with metastability is not the unpredictable 0 or 1 outcome (who cares, either result is as justified as the other) or the strange level (it does not exist in buffered CMOS, or if it existed it could easily be fixed by biasing the buffer). The real, and unsolvable, problem is that metastability will cause a logic-level change at an uncontrollable moment. Most of the time it is just delayed by picoseconds or one or a few ns. But, very rarely, it can be more nanoseconds.

Novices, with their eyes glued to a simulator, must be told that X just stands for "I do not know enough to tell you what it is", and not for "strange electrical level".

Peter Alfke, Xilinx Applications

Article: 90454
I'm using the V4SX55-FF1148, and the banks where I want to place IDELAYCTRLs are banks 6, 1 and 5, so:

idelayctrl_loc:
INST "DDR2_interface/i_DDR/DelayCtrl_0_b_U" LOC = "IDELAYCTRL_X0Y6" ;
INST "DDR2_interface/i_DDR/DelayCtrl_1_b_U" LOC = "IDELAYCTRL_X0Y7" ;
INST "DDR2_interface/i_DDR/DelayCtrl_2_b_U" LOC = "IDELAYCTRL_X1Y4" ;
INST "DDR2_interface/i_DDR/DelayCtrl_3_b_U" LOC = "IDELAYCTRL_X1Y5" ;
INST "DDR2_interface/i_DDR/DelayCtrl_4_b_U" LOC = "IDELAYCTRL_X2Y6" ;
INST "DDR2_interface/i_DDR/DelayCtrl_5_b_U" LOC = "IDELAYCTRL_X2Y7" ;

kind regards,
Tim

"pipjockey" <pipjockey@yahoo.com> wrote in message news:1129202988.430625.210990@g47g2000cwa.googlegroups.com...
> Tim,
> What part and package are you attempting to use?
> It has been my observation that when Xilinx has put multiple V4 parts
> into a common package, the parts with fewer IOs will have their NC pins
> located in the middle of the left and right sides. Perhaps this is
> the case, but without more specifics, I don't think anyone will be able
> to assist you.
>
> Pip
>
> www.oledatech.com

Article: 90455
I am using Xilinx EDK 7.1.2 Build EDK_H.12.5.1+0, targeting the MicroBlaze. If I define a pure virtual function in a C++ class and declare an object of a derived class in which that function is overridden, linker errors result for "realloc" if Library/OS parameters STDIN and STDOUT are defined as "None", and additionally for "outbyte" and "inbyte" if they are defined as "RS232". A transcript for the latter case follows:

Command xbash -q -c "cd /cygdrive/c/Test/XPS/; /usr/bin/make -f system.make program; exit;" Started...
mb-gcc -O0 C:/Test/src/main.cpp -o Test/executable.elf \
  -mno-xl-soft-mul -g -I./microblaze_0/include/ -IC:/Test/src/ -L./microblaze_0/lib/ \
  -xl-mode-executable \
  -D__cplusplus -Weffc++ -DNDEBUG -lstdc++
/cygdrive/c/EDK/gnu/microblaze/nt/bin/../lib/gcc/microblaze/3.4.1/../../../../microblaze/lib/libstdc++.a(cp-demangle.o): In function `d_print_resize':
/cygdrive/y/gnu_builds/halite/env/Jobs/MDT/sw/nt/gnu1/bld_mb_gcc/microblaze/libstdc++-v3/libsupc++/cp-demangle.c(.text+0x23b0): undefined reference to `realloc'
./microblaze_0/lib//libc.a(write.o): In function `write':
write.o(.text+0x2c): undefined reference to `outbyte'
write.o(.text+0x58): undefined reference to `outbyte'
write.o(.text+0x68): undefined reference to `outbyte'
./microblaze_0/lib//libc.a(read.o): In function `read':
read.o(.text+0x3c): undefined reference to `inbyte'
collect2: ld returned 1 exit status
make: *** [Test/executable.elf] Error 1
Done.

If I supply null routines simply to satisfy the linker, I notice an increase of 60K in code size.

The virtual function definition is:

virtual void Get_String(char string[]) = 0;

If I simply change the above line to "virtual void Get_String(char string[]) {};", no linker errors result and the program runs as expected.

If I write code in which a (C++) exception is thrown and handled, the same linker errors occur as listed above. The only workaround here is to write a lot of error-handling code.
Despite the references in the error messages, my program does not call any printing routines.

Article: 90456
I have discovered that disabling the sequential optimizations in Synplify corrects the problem. So I am still supposing that the problem is due to the use of Synplify with Virtex4.

Regards,
Javier

On Thu, 13 Oct 2005 14:44:49 +0100, Aurelian Lazarut <aurash@xilinx.com> wrote:
> Hi Javier,
> Javier Castillo wrote:
>
>> I found the problem.
>>
>> I was using Synplify 8.1 to implement the design and it seems that it
>> has some problem using RAMB16 primitives. Using XST the design works
>> fine. The same design using Synplify and Virtex2 also works. So I suppose
>> there is a problem with my design and the Virtex4 mapper.
>
> Very likely Synplify doesn't propagate the bram attribute further (but
> XST does); what I would try is to run netgen to generate a back-annotated
> netlist after ngdbuild (translate) and try to see if the
> attribute is attached to the bram correctly.
> I really don't see this coming from the mapper (assuming mapper is the
> Xilinx MAP app), but I can investigate for you if you give me some files.
> I think the default value for this attribute has changed from V2 to V4,
> and this would explain the difference in behaviour when it is not set by
> the synthesizer.
>
> Cheers,
> Aurash
>
>> Regards
>>
>> Javier
>>
>> On Wed, 12 Oct 2005 21:29:02 +0200, Javier Castillo
>> <jcastillo@opensocdesign.com> wrote:
>>
>>> Hello,
>>>
>>> I have a design in a Virtex2 that uses RAMB16 primitives. In this
>>> design I access this memory to write and read the same address at
>>> the same time. This gives a collision, but on the board it has the
>>> expected result; that is, it writes the data and puts the new data
>>> on the output.
>>> Porting this design to a Virtex4, I found that the behaviour of the
>>> RAMB16 is different and the design doesn't work. I got around this
>>> problem by adding some logic to avoid collisions in the memory. But my
>>> question is: is there any difference between the RAMB16 primitive in
>>> Virtex2 and Virtex4?
>>>
>>> Regards
>>>
>>> Javier

Article: 90457
Thanks for the reply.

The txt file I am planning to use contains some video packets, which are generated by a set of Python scripts. Writing Verilog modules to generate the data in the txt file would be to start all over again, which I think is unnecessary. I was hoping that there would be some shortcut by which you can initialize the memory on the FPGA, or maybe define the memory on the FPGA.

How do people store long look-up tables on an FPGA? Especially if the look-up table data... let's say coefficients for something... are generated by some other program.

Article: 90458
Claudio, your problem remains unsolved because you never told us what your problem is.

If you want to select the best algorithm for a given FPGA family, that is an easy issue. If you want to compare different algorithms on different FPGA families and/or different manufacturers: more work, but still easy. If you are aiming for an ASIC: a totally different, and more difficult, issue.

Don't ask for help when you are unwilling to tell us your parameters!

Peter Alfke

Article: 90459
Robert wrote:
> Thanks for the reply.
>
> The txt file I am planning to use contains some video packets, which
> are generated by a set of Python scripts. Writing verilog modules to
> generate the data in the txt file would be to start all over again,
> which I think is unnecessary. I was hoping that there would be some
> short cut way by which you can initialize the memory on the FPGA or may
> be define the memory on FPGA.
>
> How do people store long look up tables on FPGA? Especially if the
> look-up table data... let's say coefficients for something... are
> generated by some other program.

You could declare the text as a string constant in VHDL, then write a VHDL function to convert the text into the bit-vector data needed to initialize the BRAM. I do a similar thing for some of the tables in my DSP designs, where I generate the table and convert it to integers using Excel, then cut and paste the column from Excel into a VHDL integer array constant. The cut and paste for an integer array is easier if you put a column of commas in the column to the right of the column of integers in the Excel spreadsheet, and then copy both those columns to the VHDL. Likewise, you can also use Matlab to generate a table of integers, writing it to a file using dlmwrite, and then opening that file with a text editor and cutting and pasting into your VHDL editor. In the case of the string, you'll have to write your own function to convert the string characters to integers if you do it in VHDL, because the synthesis tools do not recognize the textio package. It isn't hard, just tedious.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759

Article: 90460
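The Excel/Matlab cut-and-paste step Ray describes can also be scripted. A hedged sketch (the `coeff_array_t` type name and the constant name are my assumptions - use whatever array type your design actually declares) that turns a list of integers into a VHDL constant ready to paste into a source file:

```python
def to_vhdl_constant(name, type_name, values):
    """Emit a VHDL integer-array constant from a Python list of values."""
    body = ",\n    ".join(str(v) for v in values)
    return ("constant {} : {} := (\n    {}\n  );"
            .format(name, type_name, body))

# e.g. filter coefficients generated by another script:
print(to_vhdl_constant("COEFFS", "coeff_array_t", [12, -7, 103]))
```

Regenerating the table then becomes a one-line script run instead of an Excel round trip.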
Thanks! However, I am not very familiar with VHDL coding. Can you paste a sample Verilog code?

Also, when you say cutting and pasting into VHDL code, do you mean that I'll have to do this each time the data in my txt file changes? The data I generate in the text file will change depending on my inputs. I need a way so that I can quickly load/initialize the RAM with the new values from the txt file. I can write the MATLAB code to generate the format required for initializing the RAM, but how do I actually initialize it?

Thanks in advance.

Robert.

Article: 90461
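Since Robert asked for Verilog: one common route (hedged - check that your synthesis flow supports it; XST-era tools generally accept `$readmemh` for initializing inferred RAMs) is to have the generating script also write a hex memory-image file, then load it in the RTL with `initial $readmemh(...)`. A minimal sketch of the conversion, assuming one decimal value per line in the txt file:

```python
def txt_to_memh(text, width_bits=4):
    """Convert lines of decimal values into $readmemh-style hex lines."""
    digits = (width_bits + 3) // 4  # hex digits needed per memory word
    values = [int(line) for line in text.split() if line.strip()]
    return "\n".join(format(v, "0{}x".format(digits)) for v in values)

# e.g. a 4-bit-wide RAM image: each input value becomes one hex line
print(txt_to_memh("3\n10\n15\n0"))
```

On the Verilog side, something like `reg [3:0] ram [0:255]; initial $readmemh("data.mem", ram);` then makes reloading just a matter of regenerating data.mem - no RTL edits when the data changes.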
Hi all,

I'm trying to put together a picture of how Xilinx FPGAs evolved, from the XC2000 series up to the latest Virtex-4. Finding information on the early series is nigh impossible, however, so if anyone remembers the XC2000/3000 series and can answer *any* of the questions below, it'd be much appreciated!

1) When first introduced in 1985, which of the XC2000-series devices (2064, 2018) were actually made available?

2) When first introduced in 1987, which of the five XC3000-series devices (i.e. 3020, 3030, 3042, 3064 and 3090) were actually available? When did the 3090 finally arrive?

3) What year were each of the XC3000A, XC3000L and XC3100 families introduced in?

Thanks,

Mike

Article: 90462
Mike,

I was a user then, so I will try (to answer). See below,

Austin

> 1) When first introduced in 1985, which of the XC2000-series devices
> (2064, 2018) were actually made available?

I saw a board with one of each on them in 1987. I know both went into production, and I also know they were declared obsolete (after a last-time buy) the year I joined Xilinx (1998). I believe the parts were shipped for 15 years total (I found a tube of 2018's in a file cabinet only a year ago...). You can still obtain small quantities over the web at various obsolete-parts houses.

> 2) When first introduced in 1987, which of the five XC3000-series
> devices (i.e. 3020, 3030, 3042, 3064 and 3090) were actually available?
> When did the 3090 finally arrive?

We started with the 3030, and graduated immediately to the 3042 for many of the designs we started in 1990. Couldn't convince my boss to use FPGAs until then. I am not sure, but the original family did not include the 3090. It was added later.

> 3) What year were each of the XC3000A, XC3000L and XC3100 families
> introduced in?

I believe the A was the first process shrink, and that happened in 1995. Never used the 3100 until much later, as I expect that was another process shrink, and the specs changed, so the part number changed too.

Article: 90463
Mike wrote:
> 1) When first introduced in 1985, which of the XC2000-series devices
> (2064, 2018) were actually made available?

2064 I think. See http://groups.google.com/groups?q=fpga+xilinx+2064+xact

--mt

Article: 90464
In article <DfG2f.1471$vE5.1434@lakeread03>, Ray Andraka <ray@andraka.com> wrote:
> I'm still mulling over replacing my aging system. Looks like a lot of
> the newer workstation-class processors are 64-bit processors, either P4
> or AMD 64. I've seen several notes stating that you should check to see
> if your applications will run on 64-bit systems before buying one. Not
> sure if this is going to be a problem. I want to buy as much performance
> as I can, but can't afford to not have the non-CAE stuff work.
> I need to run, at a minimum:
> Xilinx, Altera, Actel FPGA tools,
> Synplify, ModelSim, Aldec,
> Matlab w/ Simulink,
> MS Office, QuickBooks,
> Acronis (disk-imaging backup),
> virus protection (don't care whose),
> Adobe Acrobat.
>
> Am I going to have problems using one of the 64-bit workstations for
> this?

I wonder if you might be better off running Linux on a 64-bit development machine. Linux 64-bit support generally seems much better than Windows 64-bit support. Sure, you wouldn't have MS Office, but OpenOffice is a good replacement and the price is right ;-) No need for virus scanners (someone else mentioned that they don't work on 64-bit Windows). Most all of the rest of the software you mention is available on Linux (except for QuickBooks). Seems I also recall seeing posts here where people compare simulator speed on comparable machines running each OS, and the Linux-based HDL simulator runs a bit faster. Anyone got numbers?

Phil

Article: 90465
John_H wrote:
> Even if your design runs very slow, the SDRAM module has high current
> demands.
>
> Since all the VSS and VDD pins are connected, you still need to connect
> all the pins on the module. Would you consider it safe to ride in a
> hot-air balloon where the basket is attached by one small rope? It may
[cut]

Did anyone else miss the message that this is in reply to? I found it using Google, but it didn't come down on my newsfeed.

Jeremy

Article: 90466
kha59@student.canterbury.ac.nz wrote:
> Thanks Alvin,
>
> The problem is to assign one of 5 colours to each of the integers from
> 1 upwards such that if any two integers have the same colour, the
> integer that they sum to must have a different colour:
>
> eg.
> If we have
> 1-green 2-blue 3-green
>
> then 4 cannot be green or blue as 1+3 = 4 and 2+2 = 4.

I can follow <> green, but <> blue seems to extend your rule?

> A correct sequence of 160 digits for 5 colours is known. I wish to find
> a sequence of 162 digits.

So that's a string of ~160 characters, where each character can be one of 5 values?

> I'm doing an exhaustive search on a severely restricted subset of all
> the possible sequences. The sequence is built up one colour at a time
> until you get to the point where you know that somewhere down the track
> there will be no possible colour; then you go back one in the sequence
> and try a different colour, etc.
>
> This is quite easy to split up: each chip at a time should only increase
> the sequence by a few digits (due to memory constraints) and report
> each possible final sequence back to the coordinating chip, which dishes
> each of these out to other chips when they request more work. Of course
> there needs to be a prioritisation of sequences such that the sequence
> queue doesn't get too big (ie. always dish out the longest sequences,
> which will be exhaustively searched and therefore removed quickest).
> All of this stuff isn't too hard.
>
> To the points you made:
> 1) efficient communication. Each chip needs to get a sequence per unit
> of work, which is no biggy, but it will need to report back each
> sequence it ends up at... This could in fact be quite a few - maximum
> 5^(# of digits the sequence was increased by), but usually a lot less. For
> each of these it needs to return (sequence length)/2 bytes. I think I
> will need to consider this point in some more detail...
> 2) fault tolerance. I wish to find a single correct sequence and
> believe (hope!) that there are many of these (expected running time is
> time to first sequence, not completion time for an exhaustive search,
> which would in fact take 100s of years!!!). Whilst missing one sequence
> due to a faulty device/communications would be very bad, this wouldn't
> be disastrous. I'm not trying to say that no sequence of 162 digits
> exists.

This reads a little like sorting primes. The data set would certainly fit into a (very) small microcontroller - you can even pack into nibbles and consume just 80 bytes - but the problem with many small uCs will be ensuring there are no overlaps, or holes, in their scan coverage. ie. the task is simple enough, but multi-uC management is likely to be a nightmare. Something like I2C for the backplane is also likely to be a serious bottleneck.

> I don't know anything about FPGAs or how these would apply, do you
> happen to have some useful links?

Look at Altera, Lattice, Xilinx - there are many demo/eval boards and tool sets. Also look at the soft CPUs: Xilinx PicoBlaze, and Lattice Mico8.

FPGAs can do hugely parallel tasks, and on a small data set like this you have no memory bandwidth issues. With an FPGA, you could do exclusion mapping - that is, do not store the colour@integer, but instead have an array of N x 5 booleans, which are excluded colours. [All 5 => whoops, go back!] An FPGA could scan for all-ahead exclusions very efficiently indeed. One of the small soft CPUs could manage the re-seed process.

<paste>
> I've got a pure math problem implemented in C that will take about 3
> years to solve using all 5 PCs available to me (the algorithm is about
> as efficient as it will get without some major mathematical insights).

The algorithm is always where the biggest speed gains can be made, especially in efficiently mapping the algorithm to the hardware it runs on.

In an FPGA you could set up 'algorithm races', where (eg) you code 4 algorithms in ~1/4 of the chip each, run it for a couple of days, and compare their attained string lengths. If the present best is a length of 160, don't just think about 162, look to smash it! :)

I've added comp.arch.fpga, as this really sounds more like an FPGA+smart algorithm than a "sea of uC" problem.

-jg

Article: 90467
Hello Group,

I just received the Xilinx ML403 development kit. Wow, what a board! I especially like the high-speed SMA(?) connectors. Looks like gold plating.

I have downloaded the 7.1.04 service pack and the newest IP. I also went through all the click-through demos that come on the CompactFlash. Very nice. The Quick Start guide then just kind of drops off. Where does a starter go from here?

I would like to try my VGA test patterns, from a previous job, on the new platform, especially considering the post in this group about the quality of the VGA signals. But where are the hooks to download a VHDL ISE project onto this new device? Do I do it from XPS or from the ISE? I would like to get some hardware downloaded onto the fabric before I start learning about the busses, addressing, and such, of the software side of this thing. ISE 7.1.04 doesn't show me Virtex 4 parts.

Any information to the group about your early experience with the ML40x line might be of great use to me and the rest of the readers.

Thank you,

Brad Smallridge
a i v i s i o n . c o m

Article: 90468
Since I am the oldest Xilinx veteran here (Jan 88), I can answer with authority:

The first part, in 85, was the 2064 (named after the number of CLBs in the matrix), followed soon by the 2018, named after the (contentious forever) number of gate equivalents.

The 3000 series was introduced in the following sequence (sorry Austin, I was there): the 3020 in late 87; the 3090 was the second (!) in mid 88. The 3042 came soon after and became the most popular, then (early 89?) the 3064 as the last-born and forever least popular.

The original 3090 die was exactly 100 square millimeters, but it was not 10x10, since we wanted to fit two masks into the biggest possible square reticle, so it was something like 12.5 x 8 mm, and we proudly depicted it (to scale) on the back of the data book. Xilinx has, ever since, always pushed manufacturing to offer the biggest possible top-end device, because we know that there are designers salivating for something even bigger, and the unavoidably high price is grudgingly accepted when there is no alternative.

Process shrinks were done more quietly in those days, since they did not affect the user with a change in supply voltage. Those were the 5-V days, when everybody used the same Vcc :-)

The 3000A was a functional superset, and the 3100 offered higher speed through "pumped gates", internally generating a higher Vcc for certain circuit details. Also a new top-end, the 3195.

I wrote a candid comparison of the various 3000 families and published it at the front of the family datasheet, with an innovative 3-dimensional picture... See: http://direct.xilinx.com/bvdocs/publications/3000.pdf

Peter Alfke, Xilinx Applications

Article: 90469
Brad wrote: "I especially like the high-speed SMA(?) connectors. Looks like gold plating."

Yes, they are SMA, and the plating is "pure 24-carat gold, several thousand nanometers thick", as they might say in marketing, using in-terminology...

Peter

Article: 90470
Many of your questions are answered at the following URL (which may also answer some questions you have not asked):

http://www.fpga-faq.org/compare/build_form.cgi

I have also answered to the best of my knowledge below:

On 13 Oct 2005 13:04:40 -0700, "Mike" <almost_rational@yahoo.co.uk> wrote:
> Hi all,
>
> 1) When first introduced in 1985, which of the XC2000-series devices
> (2064, 2018) were actually made available?

2064 first (64 LCA cells), 2018 next (1800 gate equiv. - the start of the great gate-counting debate).

> 2) When first introduced in 1987, which of the five XC3000-series
> devices (i.e. 3020, 3030, 3042, 3064 and 3090) were actually available?
> When did the 3090 finally arrive?

3020 was first, then I think 3090, then the rest.

> 3) What year were each of the XC3000A, XC3000L and XC3100 families
> introduced in?

See the URL above.

> Thanks,
>
> Mike

Have fun,

Philip

Philip Freidin
Fliptronics

Article: 90471
Has anyone gotten iMPACT to work on Linux with the Platform USB Cable? If so, what version of the kernel were you using? What, if any, debug did you have to do to get it to work? I just got the cable yesterday, and have so far been unable to get iMPACT to even find the device. I know the drivers are installed and the firmware is loaded. Does it really work on Linux?

Thanks,

Chris

Article: 90472
Synplify DSP can take a Simulink model to optimized (folding/retiming...) RTL. A couple of links:

http://www.synplicity.com/products/synplifydsp/index.html
http://www.synplicity.com/literature/pdf/ss_signalcrafters05.pdf

vssumesh wrote:
> Any body please suggest any good tool to convert simulink model to
> equivalent RTL.

Article: 90473
"kmlpatel@gmail.com" <kmlpatel@gmail.com> writes:
> What's more than likely going on is that you were given either:
>
> 1. An evaluation Registration ID
> 2. A Foundation Simulator Registration ID
>
> Since both of these configurations are not currently supported on
> Linux,

Will the Foundation ISE Simulator be available on Linux in a future release (perhaps 7.2i)?

Article: 90474
Thanks for the tip. Looks like the ARM will do. I see a bus interface unit (presumably an AHB unit) as well. This may do the trick. Thanks.