> So the error shown below is reported for the rest of the pins, from 7 to 17.
> The same error appears for signal SR_DATA_IO<7>:
>
>   Annotating constraints to design from file "../VIR3_top/VIR3.ucf" ...
>   ERROR:NgdBuild:755 - Line 49 in '../VIR3_top/VIR3.ucf': Could not find
>   net(s) 'SR_DATA_IO<7>' in the design. To suppress this error use the -aul
>   switch, specify the correct net name or remove the constraint.
>
> What is your suggestion in this regard?

As the error message suggests, open the UCF file and comment out (in order to
remove) all the constraints concerning I/O ports (that is, pad locations) that
are not used in the code. So if you see the statement

  NET "SR_DATA_IO<7>" LOC = "pad_name";

in the UCF file and SR_DATA_IO(7) is not used in the code (that is, the 8th
bit of the inout port SR_DATA_IO : std_logic_vector(17 downto 0) is not
connected inside your VHDL module), you have to remove this LOC constraint by
deleting or commenting out the statement.

Regards,
DanR

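
[Editor's note: a minimal sketch of what the edited UCF could look like. The
pad name is the placeholder from the post, and the choice of which bits stay
constrained is only an example, not taken from the real design.]

  # Bits 7..17 of SR_DATA_IO are not connected in the current VHDL top level,
  # so their LOC constraints are commented out to silence NgdBuild:755.
  # NET "SR_DATA_IO<7>"  LOC = "pad_name";
  # NET "SR_DATA_IO<8>"  LOC = "pad_name";
  # ... and so on up to bit 17 ...
  NET "SR_DATA_IO<0>" LOC = "pad_name";   # still used, so this constraint stays
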
"Suhaib Fahmy" <sf199@doc.ic.ac.uk> wrote in message news:Pine.LNX.4.50.0306101252440.21396-100000@kiwi.doc.ic.ac.uk... > I am using the Coregen Dual-Port Virtex II Block Ram. In Coregen, you > can simply enter a 0 and it should initialise all your entries to '0'; > But when I simulate (functional) in Modelsim I get 'X's. I have also > tried using the Memory editor to create a COE file, and it created a > .mif, which was referenced in the component declaration. But still in > Modelsim, I get X's. Is there something else I should be doing? > > Thanks. > > Hi, Please define what you mean with "functional" simulation, to avoid miscommunications. Is this on RTL level (pre-synthesis) or on gate-level (pre-layout or post-layout)? How do you "measure" the X's in Modelsim? Is this a wavetrace of the I/O's ports/signals of the DPRAM, or something else? Please note that there can be multiple reasons for a signal to go to X, especially when the "resolved datatype" std_logic is used. For instance, driving a 1 and a 0 onto a signal at the same time, will also result in a X, among other things. Timing violations at gate-level can also result in the generation of an X. Best regards, JaapArticle: 56702
"Martin Sauer" <msau@displaign.de> wrote in message news:3EE6C7F1.6020702@displaign.de... > Hello, > > I want to program a Xilinx CPLD XPLA3 device via JTAG with a > microcontroller. I found on the Xilinx Homepage the appnote with the ... > use the XSVF file which will be created from the Xilinx iMPACT 4.2 Tool. > The code which is created by the XPLA Programmer 4.14 won't be work. > If I do a program and verify in one step the CPLD will be died. > Is there a explanation for this behaviour? Been there, done that: I too created a erase/pogram/verify file and managed to fail it two thirds through, resulting in the XPLA3 drawing some 300 mA with no I/O pins connected. You might be able to rewive it by using the PORT_EN pin and a beefy enough power supply, and then erasing it. First of all, get the latest version of the appnote code, and read the textfile that acommpanies SVF2XSVF. For XPLA3 you need to add some unique parameters, unlike the 9500 which requires none. When you use the 4.2 iMPACT tool (or was it 4.1 I don't remember), and do not have the proper parameters on SVF2XSVF, the result is a broken erase operation, and several other things go wrong too. With iMPACT 5.1(?) from the web edition you can generate a somewhat better SVF file, only when you run SVF2XSVF you may get an error "Invalid stable state". At least in my implementation that particular line can be removed with no ill effect. /KasperArticle: 56703
Article: 56703

Hi Lis,

> I also read that I can use a tcl testbench that provides the
> stimulus. This I have absolutely no experience in. What is
> the relative advantage/disadvantage of using a tcl vs. VHDL,
> if I'm sure that I will be using Modelsim forever?

This is a tricky one. Basically, any command you enter into Modelsim is a TCL
command, so a series of FORCE commands in a script would drive your design
well enough. However, the simulator needs to be started and stopped every time
a command gets interpreted, so if you'd write many lines like

  FORCE inputvector "100101010010101000101001001010101010"
  RUN 1000 ps

in a .do file, you'd get horrible performance. There is a workaround that
saves you a lot of restarts by specifying multiple input vectors separated in
time:

  FORCE siggy "10001" 0 ps, "110110" 1000 ps, "10011" 2000 ps, ....
  RUN 3000 ps

which will reduce the overhead significantly, but its usefulness slightly
depends on the width of your input vector and the (undocumented) maximum input
buffer size of the TCL interpreter ;-)

A great advantage of using TCL and VHDL is that you can perform actions based
on the status of any internal node using the WHEN command, such as reporting
it, changing other signals based on an internal node's state, etc., even
creating and controlling a GUI through the simulator - even though I don't
think that that's what you want - you probably want it to be over as soon as
possible ;-)

As stated, your mileage may vary. The TCL text interpreter is generally faster
than reading in formatted text in VHDL, but the starting-and-stopping of the
interpreter may be a bottleneck. On the other hand, you can queue events until
way in the future in separate statements, yielding enormous performance
boosts, but it's up to your clever coding to achieve this boost.

I know a Mentor FAE in the UK who is quite well-versed with TCL and who may be
able to help you a bit more, but you'll have to contact me privately for his
address, as I don't want to expose his address to the spam address reaping
bots.

Best regards,
Ben

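
[Editor's note: a small .do sketch putting the above together. The signal
names and values are placeholders; lower-case force/run is the usual ModelSim
spelling, and the exact WHEN syntax should be checked in the ModelSim command
reference.]

  # stimulus.do - queue several stimulus values in one force command so the
  # interpreter is not restarted for every vector
  force inputvector "10010100" 0 ps, "11011010" 1000 ps, "10011100" 2000 ps
  force reset 1 0 ps, 0 100 ps
  run 3000 ps
  # A WHEN command could additionally watch an internal node and, for example,
  # echo a message or stop the run when that node changes state.
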
Article: 56704

Thomas Heller wrote:
> I should probably read the data sheet, but...
>
> Is there also a serial-to-parallel converter which works at this
> clock-rate?

You bet, that's why it's called a multi-gigabit transceiver, or also a serdes
(serializer-deserializer). For incoming data you should guarantee transitions
at "not-too-long intervals", and that's where 8B10B encoding comes in.

> And is this also in the Spartan III?

Nope. Remember, Spartan does everything to be small and inexpensive, while
Virtex does everything to be sophisticated and fast and big. But Virtex2Pro is
actually less expensive than Virtex2, strange as that may sound. You get more
for less...

Peter Alfke, Xilinx

> Thomas

Article: 56705

Peter Alfke <peter@xilinx.com> wrote:
: But Virtex2Pro is actually less expensive than Virtex2, strange as that
: may sound. You get more for less...

But not yet. Distributors list V2P at most as engineering samples now...

Bye
--
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de
Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

Article: 56706

If the board has at least two of the MGTs brought out, you can run your
loopback also through an external cable, or even cable + pc-board traces +
cable, if you want to try things out.

Peter Alfke, Xilinx Applications

Jaap Mol wrote:
>
> Hello Bram,
>
> I have seen some evaluation boards on the Xilinx website with VirtexII PRO
> and DDR memory. About the LVDS link, do you need a physical connection of a
> specific minimum length (for instance including cable?), or is a "loopback"
> from transmitter to receiver (RX) also sufficient for the evaluation you
> want to do? Using a "loopback" will of course prevent you from having to
> buy 2 evaluation boards, just to see if you can get the LVDS link
> running.... For instance, have a look at the Virtex-II XC2V4000 XP
> Development Kit, it has got both LVDS and DDR.
>
> I guess you already consulted Kees van Egmond at Avnet/Silica?
>
> Best regards,
>
> Jaap
>
> P.S. Please give my greetings to Stefan van Beek, you probably know him..
>
> "Bram van de Kerkhof" <bvdknospam@oce.nl> wrote in message
> news:1054561920.762580@news-ext.oce.nl...
> > Hello,
> >
> > I'm looking for an evaluation board for the Virtex 2 (actually for the
> > Spartan 3, but as there are none available I will have to verify on the
> > Virtex 2). I want to verify a DDR-SDRAM and 300 Mb/s LVDS link design.
> > Two evaluation boards is also OK (one for DDR and one for LVDS).
> >
> > Who has some ideas?
> >
> > Yours, Bram
> >
> > --
> > ==================================================
> > Bram van de Kerkhof
> > OCE-Technologies BV, Building 3N38
> > St. Urbanusweg 43, Venlo, The Netherlands
> > P.O. Box 101, 5900 MA Venlo
> > Direct dial : +31-77-359 2148   Fax : +31-77-359 5473
> > e-mail : mailto:bvdk@oce.nl     www : http://www.oce.nl/
> > ==================================================

Article: 56707

That godforsaken acronym ES = engineering sample. :-(

It should really be interpreted as ES = early silicon. "Engineering sample"
conjures up a picture of a chip that is handed out for free, that may or may
not work, most likely doesn't work at temperature, etc. That's NOT what we
mean by ES. These are parts that have not yet passed all our standard
qualification exercises (burn-in, shake-rattle-and-roll, etc.). They are good
parts, work over temperature, and are fully functional, unless the errata
sheet tells you whatever does not work. And we are proud enough of them to
accept money for them. But we suggest that you don't ship them in your
production equipment, for the reasons mentioned above.

It is true that V2Pro devices are not yet as available as V2 devices are, but
the list price is lower for V2Pro, demonstrating our commitment to this line.

Peter Alfke
==========================

Uwe Bonnes wrote:
>
> Peter Alfke <peter@xilinx.com> wrote:
> : But Virtex2Pro is actually less expensive than Virtex2, strange as that
> : may sound. You get more for less...
>
> But not yet. Distributors list V2P at most as engineering samples now...
>
> Bye
> --
> Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de
> Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
> --------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

Article: 56708

Peter Alfke wrote:
>
> That godforsaken acronym ES = engineering sample. :-(
> It should really be interpreted as ES = early silicon.

I thought ES meant 'Errata Soon' :)  i.e. shorthand for "Errata coming real
soon, as soon as the customers tell us what's not working"?

-jg

Article: 56709

Prasanth,

For the reset, I think I read somewhere that the GSR (global set/reset) will
be used automatically if you do not use an asynchronous reset anywhere in your
design. Any use of an asynchronous reset will cause ISE to route your
indicated reset without use of the GSR. That will then be the only logic that
gets reset.

And for the CLKDLL, you have to use either the CLK0 (0 degrees phase) or CLK2X
(doubled) output for feedback. ISE will set up the CLKDLL appropriately.
Remember to set the divisor ratio for CLKDV. IIRC, the default was divide by
2.

For STARTUP_WAIT, see www.xilinx.com/xapp/xapp174.pdf

Marc

Prasanth Kumar wrote:
> I'm playing around with a Xilinx Spartan 2 development board
> and have some questions regarding the global reset and clock.
>
> For the reset, do we need to have a reset signal explicitly
> in our code? I've seen code examples in the Xilinx manuals
> but they never tell where to connect the reset signal at the
> top level. Also the manuals no longer recommend using the
> STARTUP module (plus this module has no output signal!) Does
> anyone have a good code template I can base upon? Could I
> also connect an external button input for an optional manual
> reset?
>
> Secondly for the clock, I want to divide down the clock signal
> using the CLKDLL module's CLKDV output because the external
> clock oscillator is too fast for my module. To do this, do I
> still connect the CLK0 back to the CLKFB signal like usual
> for a DLL? And if I use the STARTUP_WAIT option, does this
> delay the global reset until the clock is stabilized?

--
Marc Guardiani
To reply directly to me, use the address given below. The domain name is
phonetic.
fpgaee81-at-eff-why-eye-dot-net

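
[Editor's note: a minimal VHDL sketch of the CLKDLL hook-up described above.
Generic and port names follow the Xilinx unisim CLKDLL primitive; the
divide-by-4 ratio and the IBUFG/BUFG wrapping are example choices, not taken
from the original posts.]

  library ieee;
  use ieee.std_logic_1164.all;
  library unisim;
  use unisim.vcomponents.all;            -- CLKDLL, IBUFG, BUFG primitives

  entity clkdiv_example is
    port (
      clk_pad  : in  std_logic;          -- external oscillator pin
      clk_slow : out std_logic;          -- divided clock for the user logic
      locked   : out std_logic);
  end entity clkdiv_example;

  architecture rtl of clkdiv_example is
    signal clk_in, clk0_unbuf, clk0_fb, clkdv_unbuf : std_logic;
  begin
    u_ibufg : IBUFG port map (I => clk_pad, O => clk_in);

    u_dll : CLKDLL
      generic map (CLKDV_DIVIDE => 4.0)  -- example ratio; the default is 2.0
      port map (
        CLKIN  => clk_in,
        CLKFB  => clk0_fb,               -- feedback taken from CLK0, as discussed
        RST    => '0',
        CLK0   => clk0_unbuf,
        CLK90  => open,
        CLK180 => open,
        CLK270 => open,
        CLK2X  => open,
        CLKDV  => clkdv_unbuf,
        LOCKED => locked);

    u_bufg_fb : BUFG port map (I => clk0_unbuf,  O => clk0_fb);
    u_bufg_dv : BUFG port map (I => clkdv_unbuf, O => clk_slow);
  end architecture rtl;
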
Article: 56710

If you have Adobe Distiller, then everything runs easily. However, if you use
PostScript, choose "Print postscript", then choose "Printer Filename ..."
instead of "Printer Print command".

cheers
Basuki keren

-----Original Message-----
From: Manfred Kraus [mailto:news@cesys.com]
Posted At: Wednesday, June 11, 2003 8:49 PM
Posted To: fpga
Conversation: A way to copy Modelsim waveforms into word documents
Subject: Re: A way to copy Modelsim waveforms into word documents

Thank you for your hint.
When I try to print to PostScript, I get an error message.
Do you know if "lp" has to be installed separately?
Is it part of Windows or part of Modelsim?

Error message:
# Trace back: Postscript write failed: couldn't execute "lp": no such file
#   or directory
#     while executing
#     "PrintWave -postscript -dialog -win .wave"
#     (menu invoke)
# 2: tkerror {Postscript write failed: couldn't execute "lp": no such file
#   or directory}
# 1: bgerror {Postscript write failed: couldn't execute "lp": no such file
#   or directory}

"Dave Farrance" <davefarrance@yahooERASETHIS.co.uk> schrieb im Newsbeitrag
news:j05eevgd0dga7fsvt2k4d08njju6sbq102@4ax.com...
> "Manfred Kraus" <news@cesys.com> wrote:
>
> > I have to write the documentation for several designs.
> > Is there a good method to include Modelsim waveforms
> > into word documents? The cut-and-paste method doesn't
> > work well, because the labels become unreadable when resizing the
> > picture.
>
> Try the print-postscript/save-to-file feature. I don't know if Word
> can directly import ps files, in which case you'd need GSview and
> Ghostscript (free tools) to convert the file to eps or bitmap.
>
> --
> Dave Farrance

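
[Editor's note: if the print dialog keeps trying to pipe through "lp", the
Wave window can also be written to a PostScript file from the transcript with
ModelSim's write wave command. The exact options vary between versions, so
check the ModelSim command reference; the file name below is an arbitrary
example.]

  # Write the current Wave window to a PostScript file instead of piping it
  # to the "lp" print command.
  write wave waveform.ps
  # Convert waveform.ps to EPS or a bitmap with Ghostscript/GSview and import
  # that into Word, as suggested in the quoted post above.
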
Article: 56711

On Wed, 11 Jun 2003 10:24:13 GMT, Rene Tschaggelar <tschaggelar@dplanet.ch>
wrote:

> Allan Herriman wrote:
> >
> > Connecting the (Fibonacci) LFSR internal state to a DAC in parallel
> > can be modeled as a 1-bit LFSR followed by an FIR filter. The FIR
> > filter taps are just the DAC bit weights.
> >
> > If you are shifting LSB to MSB, the impulse response is:
> >
> >   1/128, 1/64, ... 1/2, 1.
> >
> > If you are shifting MSB to LSB, the impulse response is:
> >
> >   1, 1/2, ... 1/64, 1/128,
> >
> > which is the time reversal of the other impulse response (which means
> > that the magnitude response is the same, but the phase has been
> > reversed).
> >
> > This response is equivalent to a single-pole low-pass filter. (If
> > that's what you actually wanted, it's much cheaper just to use a
> > single bit output of the LFSR and an RC LP filter.)
> >
> > As the other posters said, this isn't the way to make a random voltage
> > generator. What are you actually trying to achieve?
>
> Thanks for all replies this far.
>
> A random voltage generator. With a variable clock, such as the LT6900,
> which can generate from 1 kHz to 30 MHz, and the property of the LFSR
> to have a spectrum between the clock and the clock divided by the
> number of bits,

If you connect up a spectrum analyser to the output of an LFSR, you will see
that the spectrum has a sinc-squared shape, and has significant power well
beyond the clock frequency of the LFSR. Changing the clock frequency will move
the nulls around, but won't change the total power, and won't change the shape
of the spectrum at frequencies much less than the clock frequency.

In theory it's a line spectrum, with the lines separated by Fclk/(2^N - 1),
but you will be unlikely to see the lines in practice, given reasonable values
for the resolution bandwidth of the analyser, Fclk and N.

> a system can be tested without a sweeper.

If you are interested in MLS testing you don't need a DAC. It is sufficient to
LPF the serial output of the LFSR. Ask in news:comp.dsp for more details, or
read this: http://www.dspguru.com/info/tutor/mls.htm

Regards,
Allan.

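
[Editor's note: for reference, a minimal sketch of an 8-bit Fibonacci LFSR
whose parallel state could feed a DAC as discussed above. The tap positions
give one common maximal-length polynomial; they are an example choice, not
taken from the thread.]

  library ieee;
  use ieee.std_logic_1164.all;

  entity lfsr8 is
    port (
      clk   : in  std_logic;
      rst   : in  std_logic;                      -- synchronous reset to a non-zero seed
      state : out std_logic_vector(7 downto 0));  -- parallel state, e.g. towards a DAC
  end entity lfsr8;

  architecture rtl of lfsr8 is
    signal sr : std_logic_vector(7 downto 0) := x"01";
  begin
    process (clk)
      variable fb : std_logic;
    begin
      if rising_edge(clk) then
        if rst = '1' then
          sr <= x"01";                            -- the all-zeros state must be avoided
        else
          -- taps for x^8 + x^6 + x^5 + x^4 + 1, a maximal-length choice
          fb := sr(7) xor sr(5) xor sr(4) xor sr(3);
          sr <= sr(6 downto 0) & fb;              -- shift towards the MSB
        end if;
      end if;
    end process;

    state <= sr;
  end architecture rtl;
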
Article: 56712

Hello,

I'm a bit new to FPGAs. I am in the process of learning VHDL and I am
interested in developing my own RISC CPU. My ultimate goal is to get a full
system running, capable of I/O with standard PC components (IDE, USB,
Ethernet, PS/2, video, etc.). I'm hoping to eventually load an OS on top of my
chip and run my own programs. I've done most of the work on the OS and
applications already. I'm now turning towards the hardware (I need to get that
developed before I can implement the lowest level of the software).

I am on somewhat of a budget (this project is just for fun), so I'm looking to
spend at most about $500 for a board and FPGA. Which FPGA do you recommend?
Xilinx seems to have the most options (the Spartan series has lots of boards
for it and it looks to be pretty affordable). What board do you recommend
(short of a custom one) that will offer the best fit for the job (similar to a
PC motherboard)?

All help is greatly appreciated!
Thanks

"Andrea Sabatini" <sabatini@nergal.it> wrote in message news:<bc7eft$26am$1@newsreader2.mclink.it>... > Hi all, > > I tried without any settings (I create a new project and I only specified > the target device and the libraries) and it didn't fit because it was not > able to route all the nets. > > I checked all the global nets as Giuseppe Giachella suggested, but they are > the same that I had in the MaxPlusII poject. > > I even tried to let QuartusII to do the synthesys but I did not obtain > anything good. > > In my project I am using a PCI core by PLDA (master-slave v5.12), is it a > clue for someone? > > Any other ideas? > > Regards, > > Andrea Sabatini In order to understand the differences between the two flows (Maxplus+ Synopsys , Quartus +Symplify): the netlist imported in Quartus was an "atom" one ? In the past Maxplus2 wasn't able to import atom netlists, so I suppose that the Maxplus flows started from a "non-atom" netlist (by the way, Altera FAE always told me Quartus prefers atom netlists). Does the error message always refer to specific parts of your design ? If this is the case, you can try to exclude them ( modifying the original source code) to isolate the problematic area. Does the error message always refer to specific resources inside the Fpga ( X clocks/clear/presets available but X+N necessary in LAB_A_B....) or to interconnects among LAB/EAB ? Again, hope this helps Giuseppe GiachellaArticle: 56714
Article: 56714

While I try to compile a project I get the following error:

  ERROR: a gnd net is driven by primitive gate(s) --
    NET: x_notri/ram_ce_n
    DRIVING GATE: x_notri/p_notri_process_quantum_timer/reg_i_dup_27
    DRIVING GATE: x_notri/p_notri_process_quantum_timer/reg_i

Does anyone know what this means? How can I fix it?

Article: 56715

Hi,

Is there any compiler for SystemC which is compatible with Xilinx ISE 5.1i?

Terrence

"Martin Euredjian" <0_0_0_0_@pacbell.net> writes: > Nothing would keep you from producing DVI signals out of a V2 other than the > maximum channel frequency you can achieve. The range of 0 to about 200MHz > is attainable. Beyond that I don't have enough experience to tell you. > 200MHz would equate to a DVI link producing a 640x480 image at 60 frames per > second with about 10% H and V blanking. Get up to 1024x768 and you need to > output about 600MHz per channel. I don't think V2 can do that. I think > that's where V2Pro comes into play. > Using what sort of IOB? None of the ones that I have seen look like the TMDS physical layer, which I will attempt to draw (from page 33 of the Digital Display Working Group DVI spec Rev 1.0 - from http://www.ddwg.org/downloads.html): VCC __|_ / / \ \ / / __________________________ \ \ __________ ___|___|___|\ | | |+\___ | ______ Xmission line, Shielded _______|___|-/ | | Twisted Pair |/ | | __________________________ | | | \ _ D | \ D | | |___| | | \|/ V current source! | | /// Does that make any sense to anyone!? Cheers, MartinArticle: 56717
"Martin Euredjian" <0_0_0_0_@pacbell.net> writes: > The parallel-data-in to serial-10bit-coded-data-out and LVDS is, shall we > say, trivial. The issue is output bitstream clock rate. If you need DVI > beyond the lowest resolutions you just can't do it on V2 because you are in > GHz territory or very close to. I'm not sure how fast you an get a V2 LVDS > output to go (I only have about a year and a half worth of experience with > FPGA's and the fastest I've gone is 165MHz) but I know it ain't in the GHz > region. Maybe 200 to 300MHz, max? The datasheet shows samples of (simple) > internal functions running upwards of 400MHz. > Peter? > Indeed - it is merely the Physical layer that is causing us the trouble - the DVI spec doesn't look like how I understand LVDS to be. LVDS has a current source at the top of two totem poles, with the top half of one pin on when the bottom half of the other is. And termination across the pair, rather than each line to VCC. DVI looks more like (P)ECL (maybe?) I posted an ASCII art of the DVI PHY somewhere else on this thread. Cheers! Martin -- martin.j.thompson@trw.com TRW Conekt, Solihull, UK http://www.trw.com/conektArticle: 56718
Article: 56718

Hi,

Unfortunately the PLDA IP is not synthesizable by the synthesis software (I
think because we do not have the license), so I have to pass it as-is to the
place & route tool.

Regarding the netlist, the "atom" possibility is not valid for the device I am
working with (ACEX1K).

Regarding the unrouted nets, it seems to me that they are uniformly spread all
over the design (in the last test I did I obtained 662 unrouted nets!).

Regarding the specific resources that generate errors, they are the following:
- I/O pins (nets coming from or going to them)
- logic cells
- embedded cells (I do not think I have control over memories!)

Andrea

Article: 56719

Patrik Eriksson wrote:
>
> Peter Alfke wrote:
> > "parity bit" is just a name, suggesting one possible application. We
> > might have called it "extra bit", or "Xilinx Ninth"...
> > Use to store parity must be very rare.
> >
> > Peter Alfke
>
> Why not call them data bit(s)? E.g. D[31:0], DP[3:0] => D[35:0]. This
> should make it much easier for synthesis tools to use the 'parity' bits,
> which e.g. Synplify doesn't do today. If you try to use a 33 bits wide

Try Synplify 7.3

> and 512 entries deep memory today, the synthesis will end up with two
> BRAMs instead of one. This is a waste of memory! Of course you could
> manually instantiate the RAM primitives, but why should you do the work
> that the tools should do?
>
> /Patrik

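
[Editor's note: for reference, a sketch of the manual-instantiation route
Patrik mentions - a single Virtex-II block RAM configured as 512 x 36, with
the 33rd data bit carried on one of the "parity" lines. Port names follow the
RAMB16_S36 unisim primitive; the entity and signal names are just examples.]

  library ieee;
  use ieee.std_logic_1164.all;
  library unisim;
  use unisim.vcomponents.all;          -- RAMB16_S36 primitive

  entity ram512x33 is
    port (
      clk  : in  std_logic;
      we   : in  std_logic;
      addr : in  std_logic_vector(8 downto 0);
      din  : in  std_logic_vector(32 downto 0);
      dout : out std_logic_vector(32 downto 0));
  end entity ram512x33;

  architecture rtl of ram512x33 is
    signal dip, dop : std_logic_vector(3 downto 0);
    signal do32     : std_logic_vector(31 downto 0);
  begin
    dip <= "000" & din(32);            -- the 33rd data bit rides on a parity line

    u_bram : RAMB16_S36
      port map (
        CLK  => clk,
        EN   => '1',
        SSR  => '0',
        WE   => we,
        ADDR => addr,
        DI   => din(31 downto 0),
        DIP  => dip,
        DO   => do32,
        DOP  => dop);

    dout <= dop(0) & do32;             -- reassemble the 33-bit word: one BRAM, not two
  end architecture rtl;
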
Article: 56720

Hi All,

You are right, the signalling used in DVI is not exactly LVDS, it is current
mode, but, as Peter Alfke mentioned in the other thread on this subject, the
V2 differential drivers are highly programmable and I think that with some
tweaking they can be made to output a signal that is acceptable to a DVI
receiver chip. As far as the data rate concerns Martin E. had, V2's SERDES is
more than capable of handling the data rates that are necessary for DVI links
(up to 3 Gbits/s as far as I know should be fine with a V2). Overall, I still
stand by what I said before: I am not SURE that it can be done, but I am
closer to concluding that it can than that it cannot. A good course of action
might be to send a Silicon Image AE an e-mail and ask whether they (or any of
their customers) got the V2 to output DVI data that one of their PanelLink
receiver chips can accept.

Cheers,
Ljubisa Bajic
ATI Technologies
--- My opinions do not represent those of my employer. ---

Martin Thompson <martin.j.thompson@trw.com> wrote in message
news:<uptljr6f1.fsf@trw.com>...
> "Martin Euredjian" <0_0_0_0_@pacbell.net> writes:
>
> > The parallel-data-in to serial-10bit-coded-data-out and LVDS is, shall we
> > say, trivial. The issue is output bitstream clock rate. If you need DVI
> > beyond the lowest resolutions you just can't do it on V2 because you are
> > in GHz territory or very close to it. I'm not sure how fast you can get a
> > V2 LVDS output to go (I only have about a year and a half worth of
> > experience with FPGAs and the fastest I've gone is 165MHz) but I know it
> > ain't in the GHz region. Maybe 200 to 300MHz, max? The datasheet shows
> > samples of (simple) internal functions running upwards of 400MHz.
> > Peter?
>
> Indeed - it is merely the physical layer that is causing us the trouble -
> the DVI spec doesn't look like how I understand LVDS to be. LVDS has a
> current source at the top of two totem poles, with the top half of one pin
> on when the bottom half of the other is. And termination across the pair,
> rather than each line to VCC.
>
> DVI looks more like (P)ECL (maybe?)
>
> I posted an ASCII art of the DVI PHY somewhere else on this thread.
>
> Cheers!
>
> Martin

Article: 56722

Mandilas Antony wrote:
> while I try to compile a project I get the following error:
>
> ERROR: a gnd net is driven by primitive gate(s) -- NET:

http://groups.google.com/groups?q=driven+primitive+default+reg_col

--
Mike Treseler

Article: 56723

The LVDS outputs from Virtex2 can output up to 800 Mbps, and are heading
towards 1 Gbps. The MGT outputs of Virtex2Pro do anything up to 3.125 Gbps
every day, even driving up to one meter of pc-board, or long cables. That's
exactly what they were designed and characterized for.

If DVI requires a particular PECL-like driver, I think that can be supplied by
a tiny (ECL-gate?) chip. Should not be a stumbling block...

Peter Alfke, Xilinx Applications

Ljubisa Bajic wrote:
>
> Hi All,
>
> You are right, the signalling used in DVI is not exactly LVDS, it is
> current mode, but, as Peter Alfke mentioned in the other thread on this
> subject, the V2 differential drivers are highly programmable and I think
> that with some tweaking they can be made to output a signal that is
> acceptable to a DVI receiver chip. As far as the data rate concerns
> Martin E. had, V2's SERDES is more than capable of handling the data
> rates that are necessary for DVI links (up to 3 Gbits/s as far as I know
> should be fine with a V2). Overall, I still stand by what I said before:
> I am not SURE that it can be done, but I am closer to concluding that it
> can than that it cannot. A good course of action might be to send a
> Silicon Image AE an e-mail and ask whether they (or any of their
> customers) got the V2 to output DVI data that one of their PanelLink
> receiver chips can accept.
>
> Cheers,
> Ljubisa Bajic
> ATI Technologies
> --- My opinions do not represent those of my employer. ---
>
> Martin Thompson <martin.j.thompson@trw.com> wrote in message
> news:<uptljr6f1.fsf@trw.com>...
> > "Martin Euredjian" <0_0_0_0_@pacbell.net> writes:
> >
> > > The parallel-data-in to serial-10bit-coded-data-out and LVDS is, shall
> > > we say, trivial. The issue is output bitstream clock rate. If you need
> > > DVI beyond the lowest resolutions you just can't do it on V2 because
> > > you are in GHz territory or very close to it. I'm not sure how fast
> > > you can get a V2 LVDS output to go (I only have about a year and a
> > > half worth of experience with FPGAs and the fastest I've gone is
> > > 165MHz) but I know it ain't in the GHz region. Maybe 200 to 300MHz,
> > > max? The datasheet shows samples of (simple) internal functions
> > > running upwards of 400MHz.
> > > Peter?
> >
> > Indeed - it is merely the physical layer that is causing us the trouble
> > - the DVI spec doesn't look like how I understand LVDS to be. LVDS has a
> > current source at the top of two totem poles, with the top half of one
> > pin on when the bottom half of the other is. And termination across the
> > pair, rather than each line to VCC.
> >
> > DVI looks more like (P)ECL (maybe?)
> >
> > I posted an ASCII art of the DVI PHY somewhere else on this thread.
> >
> > Cheers!
> >
> > Martin

Article: 56724

Hi,

I have some doubts about using the External Memory Controller (EMC) for
off-chip memory in the Xilinx Embedded Development Kit (EDK).

Q1: Can I use off-chip memory as CODE MEMORY to store my code?

Q2: If yes, then how do I initialize this external RAM, used as CODE MEMORY,
with my C code using EDK?

Q3: I don't find any API in the given emc.c file to write/read to/from
external memory using the EMC. I am expecting some API in C which will control
the activities of the MB on the OPB bus, just the same as the API given in the
gpio.c file to write through GPIO (XGpio_DiscreteWrite/read)...

TIA
Viral