<praveen.kantharajapura@gmail.com> wrote in message news:47cf10b7.0504130430.9a34497@posting.google.com...
> Hi all,
>
> This is a basic question regarding the SDA and SCL pins.
> Since both these pins are bidirectional, they need to be
> tristated so that the slave can acknowledge on SDA.
>
> But I have seen in some docs that a '1' needs to be converted to a 'Z'
> while driving SDA and SCL. What is the reason behind this?
>
> Thanks in advance,
> Praveen

In order to drive a '1' (with the EXTERNAL RESISTIVE PULLUP) you need to Z the wire, i.e. tristate it:

0 is driven as 0
1 is driven (or rather released) as Z; the external pullup will pull the wire high

Antti
Article: 82476

<praveen.kantharajapura@gmail.com> wrote in message news:47cf10b7.0504130430.9a34497@posting.google.com...
> Hi all,
>
> This is a basic question regarding the SDA and SCL pins.
> Since both these pins are bidirectional, they need to be
> tristated so that the slave can acknowledge on SDA.
>
> But I have seen in some docs that a '1' needs to be converted to a 'Z'
> while driving SDA and SCL. What is the reason behind this?
>
> Thanks in advance,
> Praveen

They're not tri-stated as such, merely a pull-down or open-collector output with an external resistive pull-up. If using a micro or FPGA, then set the output to a "0" and use the tri-state enable to turn the pin on or off.
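For reference, a minimal VHDL sketch of the open-drain style both replies describe; the entity and signal names here are invented for the illustration and are not from any particular core:

  library ieee;
  use ieee.std_logic_1164.all;

  entity i2c_pins is
    port ( sda       : inout std_logic;   -- pin with external pull-up resistor
           scl       : inout std_logic;   -- pin with external pull-up resistor
           sda_drive : in    std_logic;   -- '0' = pull the line low, '1' = release it
           scl_drive : in    std_logic;
           sda_in    : out   std_logic;   -- readback for acknowledge / data from the slave
           scl_in    : out   std_logic ); -- readback for clock stretching
  end i2c_pins;

  architecture rtl of i2c_pins is
  begin
    -- Never drive a '1': only '0' or 'Z'. The external resistor supplies the high level.
    sda <= '0' when sda_drive = '0' else 'Z';
    scl <= '0' when scl_drive = '0' else 'Z';
    -- Reading the pins back shows what is actually on the bus.
    sda_in <= to_x01(sda);
    scl_in <= to_x01(scl);
  end rtl;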
Article: 82477

Fcclk max = 100 MHz

bitstream of VLX25 = 7.4 Mbit

SelectMAP port width = 32 bits

so the minimum reconfiguration time for this part should be a little bit more than 7.4/100/32 = 2.3 ms

correct?

Is the same CCLK freq sustainable thru ICAP?

Thanks
Article: 82478

Hello,

I am using a Virtex-II Pro for a design of mine. When I implement the synthesized EDF using ISE, it gives a hold-time violation on the input clock for any frequency given during synthesis. I have taken the clock through the usual IBUFG followed by a BUFG, but it is still showing delay, I guess. How can I eliminate this problem?

I have been integrating IP cores, but the problem is that some of the IP cores work fine, yet when I make a minor change to some other module and re-synthesize, the working IP stops working. There is no resource problem, as I am using only 20% of the device, and my clock speed is only 20 MHz.

Can someone point out where the problem may be?

Thanks and regards
Williams
Article: 82479

>>> I am looking for some information about how "real" this soft CPU
>>> technology is. I'm working with someone who has become enamored with
>>>
>>> - What are the compelling reasons to go this route?
>>
>> 1) no obsolescence
>> 2) build your system with the peripherals and functions you need
>> 3) design hardware after it has been manufactured to speed up time to
>> market; the hardware is only a bitstream and can be updated softly, and you
>> can rework early design errors without PCB changes
>> 4) flexibility, design to be future-safe; new hardware features can be added
>> after the product hardware is manufactured
>> 5) etc.
>
> As a practical matter, don't all of these points also apply to an external
> uC with an FPGA as a peripheral? Also, I'm a little perplexed by point 1 -
> no obsolescence - don't FPGA families become obsolete just like anything
> else? Or do you mean something else?
>
> -Jeff

- Most FPGAs run in high volume; uCs often have a lot of derivatives, and the ones that are not selling well are often cancelled.
- If the FPGA finally does become obsolete, it will be pretty easy to move the design to a newer product, at least if you have the VHDL source of the core.
- In a way you are also right that this is a marketing argument, especially if you have no source code. You could, for example, say that "Nios I" is already obsolete, i.e. no longer well supported...

I think two further important advantages of soft cores are:

5) reduced board space
6) reduced costs (if the design is right; if it is not, you can run into additional costs from needing a larger FPGA.)

Thomas
www.entner-electronics.com
Article: 82480

Ankit Raizada wrote:
> I am just wondering, if I simulate a design given in Verilog using a
> test fixture in a modern simulator like ModelSim and the outputs are
> verified, what are the chances that the design will still not work in
> the actual FPGA, assuming it fits and place and route is successful?
>
> What are the factors that make this difference, and how can I catch them
> in the design cycle?
>
> I am actually creating a few designs for DSP algos for my academic
> project, and being a beginner in this whole DSP-over-FPGA area I find it
> rather difficult to decide whether to call a successful simulation a
> milestone in the design cycle or not.
>
> Please share your experiences and ideas on this.

Howdy Ankit,

Assuming the test bench is at least mildly exhaustive (and why would you bother if it wasn't?), successful simulation is most definitely a milestone out in the real world. It lets you know not only that the high-level architecture of the design is sound, but also that the logic being used to implement the design is mostly correct.

Lastly, if (or rather, when) a bug is discovered in the lab that the simulation didn't catch, you can go back to the simulation and increase your coverage, looking for other problems in addition to the one that was found in the lab. Then when the bug is seen in simulation, you can fix it, resimulate, and verify that you haven't broken some other part of the design, or uncovered another bug that was hiding behind that one.

The toughest part is simulating unforeseen and sometimes hard-to-reproduce interactions with other devices outside the FPGA. But even in those cases, it is an invaluable tool. It is also usually considerably faster to debug with than running through the place and route tools.

To answer your first question: if the design meets timing as well as input and output constraints, and the designer follows FPGA design guidelines (most important one: using global clock nets and no gated clocks), the chances are very high that the design won't fall flat on its face the first time it is put in a device. If I had to guess the most common trouble spot that people don't catch in simulations, it would be clock domain crossing problems - unless they used a FIFO.

Very well worded question!

Marc
Article: 82481

Hi all,

I am new to FPGA architecture. To learn what a LUT is, I came across a few good articles posted in this group, but I am still not completely clear about it.

A 4-input LUT in Spartan-II can be implemented as a function generator or as a 16-bit ROM... accepted. But I can't imagine it as a 16-bit RAM or a shift register. If it is a 16-bit RAM / shift register, then where is the input port? I think the input port of the LUT is only available during configuration of the FPGA, but not to the designer.

Also, please suggest me some good books on the architecture of FPGAs.

Thanks.

--
Mohammed A Khader.
"Stephane" <stephane@nospam.fr> schrieb im Newsbeitrag news:d3j43r$e32$1@ellebore.extra.cea.fr... > Fcclk max = 100 MHz > > bitstream of VLX25 = 7.4 Mbit > > SelectMAP port width = 32 bits NO, 8 bits > so the minimum reconfiguration time for this part should be a little bit > more than 7.4/100/32 = 2.3ms > > correct? NO, see above > Is the same CCLK freq sustainable thru ICAP? usually is ICAP way slower than max CCLK AnttiArticle: 82483
Article: 82483

Mohammed A Khader wrote:
> Hi all,
>
> I am new to FPGA architecture. To learn what a LUT is, I came across a
> few good articles posted in this group, but I am still not completely
> clear about it.
>
> A 4-input LUT in Spartan-II can be implemented as a function generator
> or as a 16-bit ROM... accepted. But I can't imagine it as a 16-bit RAM
> or a shift register. If it is a 16-bit RAM / shift register, then where
> is the input port? I think the input port of the LUT is only available
> during configuration of the FPGA, but not to the designer.

That's the trick ;) A 'pure' LUT can't be a shift register or RAM, but if you allow the configuration logic to be interconnected with user logic, you can reconfigure the LUT contents at run time (write port) and use the 'normal' LUT inputs as the read port.

For the shift register, I'm not sure, but I think each of the 16 bits of memory inside the LUT is like a register, so when in a special mode, each input is connected to the output of the previous one. You then have a shift register, and the "normal" LUT access allows you to "tap" somewhere in the chain.

So yes, a pure LUT can't be those things, but the way these reconfigurable LUTs are implemented in FPGAs has tweaks so they can act as RAM/SRL...

Sylvain
Article: 82484

Sebastian,

I would agree with you, but we have to deal with thousands of customers, and there are some who look strictly at compliance with the written spec. Common sense and basic engineering knowledge does not always apply. That's why these strange old specs survive.

Peter Alfke
=============
Sebastian Weiser wrote:
> "Peter Alfke" <alfke@sbcglobal.net> wrote in message news:<1113362657.039477.81570@z14g2000cwz.googlegroups.com>...
> [LVTTL and LVCMOS33]
> > In reality, the two types of outputs are the same, and "both" pull up
> > to the rail.
>
> That was my guess, but I usually try to stick to the exact wording of
> a specification, only to be sure.
>
> Wouldn't it be possible to write the more stringent values into the
> Xilinx specs, independent of what JEDEC (or whatever) says? In some
> corner cases (such as the ATA/ATAPI requirements) this may help a bit.
> Currently I have a similar problem with a microcontroller
> specification: the given values may reflect the maximum worst case, but
> are so pessimistic that they don't help at all.
>
> Sebastian Weiser
Article: 82485

Hi.

I specified a system-on-chip for a 6 Mpix digital camera at Philips Semiconductors, and we did exactly what you intend to do, but with an ASIC. On-the-fly video/image processing + JPEG compression + storage/communication interfaces + RISC processor requires ~1 Mgates in an ASIC, which I would convert to 6-8 Mgates in an FPGA. I would go for a Xilinx XC3S5000 (Spartan-3 family) or a Xilinx XC4VLX80/XC4VFX100 (Virtex-4 family).

What you plan is probably feasible, but it will require a lot of effort. For the language, I would choose VHDL, but I don't want to restart an endless war on which HDL language is the best one ;-) My motivation would be a strong syntax, which probably results in fewer bugs during the design. For such a big development, you shouldn't neglect the verification effort that you'll face.

Eric

"C. Peter" <die_les_ich_nicht@gmx.net> wrote in message news:<425a39bf$1@news.uni-rostock.de>...
> Hi all,
>
> some years have gone by since I did something with FPGAs, and I am aware that
> technology has moved forward significantly since the end of the last
> century...
>
> We now think about reading out some CCD sensors and doing image processing
> with an FPGA. My questions to you:
> - do you think this is feasible?
> - which FPGA would you recommend?
> - which language would you recommend?
>
> We have used Xilinx and Handel-C so far and hence would prefer to stick to
> them. But if there are good arguments against it we would certainly follow
> them.
>
> Thanks a lot for your advice,
>
> Christian
Article: 82486

Sylvain Munaut wrote:
> Mohammed A Khader wrote:
>
>> Hi all,
>>
>> I am new to FPGA architecture. To learn what a LUT is, I came across a
>> few good articles posted in this group, but I am still not completely
>> clear about it.
>>
>> A 4-input LUT in Spartan-II can be implemented as a function generator
>> or as a 16-bit ROM... accepted. But I can't imagine it as a 16-bit RAM
>> or a shift register. If it is a 16-bit RAM / shift register, then where
>> is the input port? I think the input port of the LUT is only available
>> during configuration of the FPGA, but not to the designer.
>
> That's the trick ;) A 'pure' LUT can't be a shift register or RAM, but if
> you allow the configuration logic to be interconnected with user logic, you
> can reconfigure the LUT contents at run time (write port) and use the
> 'normal' LUT inputs as the read port.
>
> For the shift register, I'm not sure, but I think each of the 16 bits of
> memory inside the LUT is like a register, so when in a special mode, each
> input is connected to the output of the previous one. You then have a shift
> register, and the "normal" LUT access allows you to "tap" somewhere in the chain.
>
> So yes, a pure LUT can't be those things, but the way these
> reconfigurable LUTs are implemented in FPGAs has tweaks so they can act
> as RAM/SRL...
>
> Sylvain

DO NOT CONFUSE LUT and CLB

Sorry but a
LUT is : Logic Unit Table -> a LUT can only do combinatory works (AND OR XOR ...).
CLB is: Configurated Logic Block -> a CLB includes one LUT followed by one flip-flop at minimum.

For some Xilinx FPGAs the CLB can be used as RAM or as SRL (not the LUT itself ;-) )

After that advice, please open an FPGA datasheet to understand its specific architecture! LUT - CLB - CLB interconnection - IOB...

regards,
Laurent
www.amontec.com
________________________
Your FPGA Design Partner
Article: 82487

Let me correct this:

A LUT is basically a 16-bit ROM, but Xilinx has added a few things inside the LUT.

1. Since the LUT content must be written by configuration, the ROM is really writable, and Xilinx then allows the user logic to also write data into it. That's the distributed RAM or LUT-RAM.

2. Since the LUT can be used as a RAM, it is not too difficult to convert it into a shift register, the SRL16, which uses a clever trick to act as intermediary storage between the shift register bits.

All of this happens inside the Xilinx LUT (competitors do not have these extra features). The CLB then adds flip-flops, multiplexers, carry logic and interconnects.

But I do agree with Laurent: read the data sheet and the user manual. That's why we wrote them...

Peter Alfke, Xilinx Applications
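To show how the SRL16 is normally reached from HDL, here is a small VHDL sketch of the style of code that synthesis tools typically map onto a single SRL16 (an SRL16E here, because of the clock enable); the entity and signal names are made up for the example:

  library ieee;
  use ieee.std_logic_1164.all;

  entity srl16_sketch is
    port ( clk  : in  std_logic;
           ce   : in  std_logic;
           din  : in  std_logic;
           dout : out std_logic );
  end srl16_sketch;

  architecture rtl of srl16_sketch is
    signal sr : std_logic_vector(15 downto 0) := (others => '0');
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        if ce = '1' then
          sr <= sr(14 downto 0) & din;  -- shift; note there is no reset, which is what
        end if;                         -- allows the mapping into an SRL16
      end if;
    end process;
    dout <= sr(15);  -- fixed tap; an addressable tap also fits the SRL16 primitive
  end rtl;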
Article: 82488

Ankit -

In general, if the simulation says it works, it will work. Things that cause this to fail are:

1) synthesis didn't translate the design correctly. I've seen this on several occasions using XST, and I've also seen problems in the past with Design Compiler on ASIC projects. Synthesizers are made by people, so they have bugs too. Not too many, not often, but it does happen.

2) the HDL code has non-synthesizable constructs, so it simulates fine but can't be "translated" into gates.

3) improper P&R constraints are applied, so the design can't operate at the desired frequency or else has odd setup/hold problems that show up when you least expect them.

4) design bugs that don't show up in simulation. This is probably the most common case. Maybe the testbench didn't exercise a section of logic. More difficult are clock-domain crossing problems that lead to metastable-type problems. Clock crossing can also cause a variety of problems as you try to move data between clock domains. This is a topic in itself and, as far as I know, cannot be successfully simulated. For this class of problem, you have to make the design 'correct by design'. Clock crossing problems have been discussed in this forum many times; look for "metastable" as a good starting point.

With good simulation, I'd call passing simulation a milestone!

Hope this helps!
John Providenza
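As a small illustration of the 'correct by design' idea for a single-bit crossing, a two-flip-flop synchronizer in VHDL might look like the sketch below; the names are illustrative, and multi-bit buses need a FIFO or a handshake instead:

  library ieee;
  use ieee.std_logic_1164.all;

  entity sync_2ff is
    port ( dest_clk : in  std_logic;
           async_in : in  std_logic;   -- signal arriving from another clock domain
           sync_out : out std_logic );
  end sync_2ff;

  architecture rtl of sync_2ff is
    signal ff1, ff2 : std_logic := '0';
  begin
    process (dest_clk)
    begin
      if rising_edge(dest_clk) then
        ff1 <= async_in;  -- first stage may go metastable
        ff2 <= ff1;       -- second stage gives it a clock period to resolve
      end if;
    end process;
    sync_out <= ff2;      -- only this version is used in the destination clock domain
  end rtl;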
Article: 82489

praveen.kantharajapura@gmail.com wrote:
> Hi all,
>
> This is a basic question regarding the SDA and SCL pins.
> Since both these pins are bidirectional, they need to be
> tristated so that the slave can acknowledge on SDA.

No, both pins are not bidirectional. Only the master device drives the SCK line, and all slaves must leave their SCKs as inputs.

> But I have seen in some docs that a '1' needs to be converted to a 'Z'
> while driving SDA and SCL. What is the reason behind this?
>
> Thanks in advance,
> Praveen

As others have said, usually SCK and SDA have a 1-10k pullup resistor to Vdd. This makes the signal a 1 while no device is pulling a pin low. So set the output pin to a zero, and toggle it as being an input versus an output, to generate your digital signal.

Since SCK is never "tristated", you can just drive it as a 0/1 output using the master device and omit the pullup resistor. Of course, the master device must be able to source and sink a few mA. The PIC line of microcontrollers have no problem doing this; the Atmels probably work the same.

Furthermore, if the Atmels have a pin which is "open collector output only", then that would work for SDA without needing to tristate it. Just set it as an output and "0" will pull SDA low, and "1" will release it (allowing the pullup resistor to make SDA "1").

For power-hungry applications, you can increase the pullup resistors at the expense of speed and noise rejection. 100k works well for shielded, battery-powered applications.

Cheers,
MCJ
Article: 82490

I was going to suggest using Base System Builder (BSB) to build an example project and then use the relevant parts about the Ethernet controller. However, when I look at the BSB file from their website, the Ethernet controller is commented out.
(http://www.em.avnet.com/sta/home/0,4610,CID%253D13747%2526CCD%253DUSA%2526SID%253DNoNav%2526DID%253DADA%2526LID%253D6448%2526BID%253DDF2%2526CTP%253DSTA,00.html)

It's probably worth asking Avnet why that is, since the whole point of BSB is to provide a starting point for user designs to help avoid this sort of issue...

Paul

Bertrand Rousseau wrote:
>
> Hi,
>
> I'm using EDK 6.3i under linux (release 12.3) and I'm trying to
> instantiate an ethernet controller on my virtex 4 board (avnet
> xc4vlx25). I'm always getting errors from the PAR tool saying that
> timing constraints are not met, even if I try harder to synthesize.
>
> I've tried a lot of different options in the synthesis and place and
> route tools, but nothing changes, and now I just don't have any idea of
> what I could try. As I'm using the builtin ethernet controller, I'm not
> supposed to modify anything in the design of it I suppose, so what
> could I try? Is it possible to build a valid system by ignoring timing
> constraints?
>
> I include the report of the PAR tool in the message:
>
> Starting initial Timing Analysis. REAL time: 12 secs
> ERROR:Par:228 - PAR: At least one timing constraint is impossible to meet
>    because component delays alone exceed the constraint. A physical timing
>    constraint summary follows. This summary will show a MINIMUM net delay for
>    the paths. Please use the Timing Analyzer (GUI) or TRCE (command line) with
>    the Mapped NCD and PCF files to identify the problem paths. For more
>    information about the Timing Analyzer, consult the Xilinx Timing Analyzer
>    Reference manual; for more information on TRCE, consult the Xilinx
>    Development System Reference Guide "TRACE" chapter.
>
> Asterisk (*) preceding a constraint indicates it was not met.
> This may be due to a setup or hold violation.
>
> --------------------------------------------------------------------------------
> Constraint                                          | Requested | Actual   | Logic Levels
> --------------------------------------------------------------------------------
>   NET "ETH_RXC_BUFGP" MAXSKEW = 2 nS                | 2.000ns   | 0.000ns  | N/A
> * NET "ETH_RXC_BUFGP" PERIOD = 40 nS HIGH 14 nS     | 40.000ns  | 5.480ns  | 2
>   NET "ETH_TXC_BUFGP" MAXSKEW = 2 nS                | 2.000ns   | 0.000ns  | N/A
> * NET "ETH_TXC_BUFGP" PERIOD = 40 nS HIGH 14 nS     | 40.000ns  | 2.033ns  | 1
>   TSTXOUT_ethernet = MAXDELAY FROM TIMEGRP "TXCLK_GRP_ethernet" TO TIMEGRP "PADS" 10 nS | 10.000ns | 3.261ns | 1
> * TSRXIN_ethernet = MAXDELAY FROM TIMEGRP "PADS" TO TIMEGRP "RXCLK_GRP_ethernet" 6 nS   | 6.000ns  | 6.350ns | 2
>   TSCLK2CLK90_ddr_controller = MAXDELAY FROM TIMEGRP "OPB_Clk_ddr_controller" TO TIMEGRP "Clk90_in_ddr_controller" 2.875 nS | 2.875ns | 1.376ns | 0
> --------------------------------------------------------------------------------
>
> 3 constraints not met.
>
> INFO:Timing:2761 - N/A entries in the Constraints list may indicate that the
>    constraint does not cover any paths or that it has no requested value.
>
> Bertrand Rousseau
Article: 82491

gja,

Basically, I am using the fact that the IDT device is just a simple NMOS transistor, and since I know how that works (physically) I am ignoring the data sheet (as it is misleading in this case).

I know that IDT does not support this from their data sheet specifications, and they actually called me to tell me that they would not support this. Odd.

It works fine. They are sand-bagging their specifications like crazy here, and their parts work far better than the data sheet implies (in this circuit). Could be the loading (none), could be the voltages (less variation than what they spec), could be they don't want to support the application. Fine, call Xilinx. I'd much rather you call us than IDT. OK by me.

We have built it, used it, tested it, and are still doing so. I know a lot of folks out there who have done likewise. Haven't heard a single complaint.

Officially, the PCI specification does not allow any devices to be placed in series with a PCI compatible part. That is fine as well.

Austin

gja wrote:
> Austin,
> Maybe you can give me more insight to a problem I have with xapp646. The
> note states that "Since the device is a set of series-connected NMOS
> transistors, any voltage larger than a few hundred millivolts below the VCC
> pin voltage will be cut off."
> From reading the IDT appnotes and what I'm seeing on a circuit board, the
> output will always be limited to less than VCC-1. With VCC at 3.3v as shown
> in xapp646, under light loading, the output voltage is about 2.3v, and with
> a 10k load, it's closer to 2v which means essentially no noise margin for
> TTL. Look at figure 4 of http://www1.idt.com/pcms/tempDocs/AN_11.pdf or
> figure 5 of http://www1.idt.com/pcms/tempDocs/quickswitch_basics.pdf
> Do you think that I should be seeing around 2 to 2.3v output with the ckt
> shown in xapp646?
>
> gja
>
> "Austin Lesea" <austin@xilinx.com> wrote in message
> news:d3gogs$lr91@cliff.xsj.xilinx.com...
>
>> Dr,
>>
>> Spartan 2 will be around a long time. That we have demoted it from the
>> limelight is a marketing issue (just so much shelf space for the new
>> products to showcase).
>>
>> As you may be aware, we still provide the 3100A series of FPGAs, which are
>> still supporting designs done 15 years ago!
>>
>> We discontinue devices once they are not able to be manufactured and sold
>> economically. This means that there is little business, and the process
>> used to make the chips has become obsolete at the fabrication facilities.
>> We also may discontinue a particular part/package combination when that
>> package is running at extremely low volumes or becomes difficult to procure.
>>
>> Since we are still making almost all of our FPGA products, I don't think
>> you have anything to worry about with Spartan II.
>>
>> The original Virtex, and Spartan II are a lot like classic Coca-Cola --
>> they may never go away.
>>
>> However, the cost/function of newer devices is so much better than the
>> older devices, that you may want to consider designing with the latest
>> devices (at some point).
>>
>> The app notes we have published for 5V PCI detail all of the tricks to
>> make the latest 90nm devices work on the 5V PCI bus. (Xapp 646, 311)
>>
>> I hope this helps,
>>
>> Austin
"Mark Jones" <abuse@127.0.0.1> wrote in message news:upWdnTeAPOxSqcDfRVn-sA@buckeye-express.com... > No, both pins are not bidirectional. Only the master device drives the SCK > line, and all slaves must leave their SCK's as input. Yes, they are. Slaves can pull down SCL to 'wait state' traffic to match their own speed. That's why it has to be open-collector. > Since SCK is never "tristated", you can just drive it as 0/1 output using > the > master device and omit the pullup resistor. No, this is seriously against the spec! You should never drive it active high. Otherwise slaves may be damaged when they try to wait-state the bus master. I've heard this bad advice before - people say they get away with it, but they cannot say it obeys the I2C spec.Article: 82493
Article: 82493

On 13 Apr 2005 02:00:25 -0700, "Varun Jindal" <varunjindal@yahoo.com> wrote:

> hello,
>
> i am using RLOC statement as ..
>
> //synthesis attribute rloc of d1 is X12Y11
>
> As far as i understand, these X-Y co-ordinates specify the Slice
> number.
>
> My first question is - how do i specify which of the two flip-flops (in
> one slice) to use?

Use a "BEL" attribute for that ... attribute "BEL" = "FFX" or "FFY"

There are BEL attributes for the LUTs and carries ("F", "G" and "XORF", "XORG") too.

Download and read the "Constraints Guide" for more info.

- Brian
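If the design were written in VHDL rather than Verilog, the equivalent constraints could be attached as attributes in the architecture's declarative region, roughly as in the sketch below; the instance name d1 is taken from the original post, and the exact syntax and legal values should be checked against the Constraints Guide:

  attribute rloc : string;
  attribute bel  : string;
  attribute rloc of d1 : label is "X12Y11";
  attribute bel  of d1 : label is "FFX";  -- or "FFY" for the other flip-flop in the slice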
Article: 82494

In article <upWdnTeAPOxSqcDfRVn-sA@buckeye-express.com>, abuse@127.0.0.1 says...
> praveen.kantharajapura@gmail.com wrote:
> > Hi all,
> >
> > This is a basic question regarding the SDA and SCL pins.
> > Since both these pins are bidirectional, they need to be
> > tristated so that the slave can acknowledge on SDA.
>
> No, both pins are not bidirectional. Only the master device drives the SCK
> line, and all slaves must leave their SCKs as inputs.

Not true, a slave device can extend a cycle through clock stretching, and the only way to do that is for the slave device to be able to hold the clock line low.

http://www.i2c-bus.org/clockstretching/

> > But I have seen in some docs that a '1' needs to be converted to a 'Z'
> > while driving SDA and SCL. What is the reason behind this?
> >
> > Thanks in advance,
> > Praveen
>
> As others have said, usually SCK and SDA have a 1-10k pullup resistor to Vdd.
> This makes the signal a 1 while no device is pulling a pin low. So set the
> output pin to a zero, and toggle it as being an input versus an output, to
> generate your digital signal.
>
> Since SCK is never "tristated", you can just drive it as a 0/1 output using the
> master device and omit the pullup resistor. Of course, the master device must be
> able to source and sink a few mA. The PIC line of microcontrollers have no
> problem doing this; the Atmels probably work the same.
>
> Furthermore, if the Atmels have a pin which is "open collector output only",
> then that would work for SDA without needing to tristate it. Just set it as an
> output and "0" will pull SDA low, and "1" will release it (allowing the pullup
> resistor to make SDA "1").
>
> For power-hungry applications, you can increase the pullup resistors at the
> expense of speed and noise rejection. 100k works well for shielded,
> battery-powered applications.
>
> Cheers,
> MCJ
Article: 82495

Antti Lukats wrote:
> "Stephane" <stephane@nospam.fr> wrote in message
> news:d3j43r$e32$1@ellebore.extra.cea.fr...
>
>> Fcclk max = 100 MHz
>>
>> bitstream of VLX25 = 7.4 Mbit
>>
>> SelectMAP port width = 32 bits
>
> NO, 8 bits

I don't agree with you; here are the 32 configuration data bits:

PAD209 X27Y127 IOB_X1Y127 F14 1 IO_L1P_D31_LC_1
PAD210 X27Y126 IOB_X1Y126 F13 1 IO_L1N_D30_LC_1
PAD211 X27Y125 IOB_X1Y125 F12 1 IO_L2P_D29_LC_1
PAD212 X27Y124 IOB_X1Y124 F11 1 IO_L2N_D28_LC_1
PAD213 X27Y123 IOB_X1Y123 F16 1 IO_L3P_D27_LC_1
PAD214 X27Y122 IOB_X1Y122 F15 1 IO_L3N_D26_LC_1
PAD215 X27Y121 IOB_X1Y121 D14 1 IO_L4P_D25_LC_1
PAD216 X27Y120 IOB_X1Y120 D13 1 IO_L4N_D24_VREF_LC_1
PAD217 X27Y119 IOB_X1Y119 D15 1 IO_L5P_D23_LC_1
PAD218 X27Y118 IOB_X1Y118 E14 1 IO_L5N_D22_LC_1
PAD219 X27Y117 IOB_X1Y117 C11 1 IO_L6P_D21_LC_1
PAD220 X27Y116 IOB_X1Y116 D11 1 IO_L6N_D20_LC_1
PAD221 X27Y115 IOB_X1Y115 D16 1 IO_L7P_D19_LC_1
PAD222 X27Y114 IOB_X1Y114 C16 1 IO_L7N_D18_LC_1
PAD223 X27Y113 IOB_X1Y113 E13 1 IO_L8P_D17_CC_LC_1
PAD224 X27Y112 IOB_X1Y112 D12 1 IO_L8N_D16_CC_LC_1
PAD225 X27Y79 IOB_X1Y79 AA14 2 IO_L1P_D15_CC_LC_2
PAD226 X27Y78 IOB_X1Y78 AB14 2 IO_L1N_D14_CC_LC_2
PAD227 X27Y77 IOB_X1Y77 AC12 2 IO_L2P_D13_LC_2
PAD228 X27Y76 IOB_X1Y76 AC11 2 IO_L2N_D12_LC_2
PAD229 X27Y75 IOB_X1Y75 AA16 2 IO_L3P_D11_LC_2
PAD230 X27Y74 IOB_X1Y74 AA15 2 IO_L3N_D10_LC_2
PAD231 X27Y73 IOB_X1Y73 AB13 2 IO_L4P_D9_LC_2
PAD232 X27Y72 IOB_X1Y72 AA13 2 IO_L4N_D8_VREF_LC_2
PAD233 X27Y71 IOB_X1Y71 AC14 2 IO_L5P_D7_LC_2
PAD234 X27Y70 IOB_X1Y70 AD14 2 IO_L5N_D6_LC_2
PAD235 X27Y69 IOB_X1Y69 AA12 2 IO_L6P_D5_LC_2
PAD236 X27Y68 IOB_X1Y68 AA11 2 IO_L6N_D4_LC_2
PAD237 X27Y67 IOB_X1Y67 AC16 2 IO_L7P_D3_LC_2
PAD238 X27Y66 IOB_X1Y66 AC15 2 IO_L7N_D2_LC_2
PAD239 X27Y65 IOB_X1Y65 AC13 2 IO_L8P_D1_LC_2
PAD240 X27Y64 IOB_X1Y64 AD13 2 IO_L8N_D0_LC_2

>> so the minimum reconfiguration time for this part should be a little bit
>> more than 7.4/100/32 = 2.3ms
>>
>> correct?
>
> NO, see above
>
>> Is the same CCLK freq sustainable thru ICAP?
>
> usually is ICAP way slower than max CCLK

Very interesting! Any figure?

> Antti
Article: 82496

I'm sure others have this problem ...

Is there a tool that'll let one view and hopefully print a schematic done in the old Xilinx F2.1i schematic tool? The new stuff doesn't want to know about the old stuff, and worse is that you can't even install 2.1i on an XP machine. (Yeah, that'll teach me to upgrade.)

I don't want to do anything with this schematic other than view it. I'm doing a new board sorta based on an old design, and the new design will of course be in VHDL rather than as a schematic.

Ideas?

-a
"Laurent Gauch" <laurent.gauch@DELETEALLCAPSamontec.com> wrote in message news:425D2A70.8020809@DELETEALLCAPSamontec.com... > > Sorry but a > LUT is : Logic Unit Table -> a LUT can only do combinatory works (AND OR > XOR ...). > Hey Laurent, Is that the Academy Français's version? In English speaking countries LUT means Look Up Table! ;-) Cheers, Syms.Article: 82498
Article: 82498

Hi Austin,

> Officially, the PCI specification does not allow any devices to be
> placed in series with a PCI compatible part. That is fine as well.

Hmmm... I thought that PCI-Compliant was "nothing in series", and PCI-Compatible was "whatever you need to do to make it work".

Apart from that, I second all your statements there. Plus, I would like to repeat here that TI seems to be more open to the use of their sn74cbtd3384c parts for this type of application.

Best regards,
Ben
Article: 82499

Andy Peters wrote:
> I'm sure others have this problem ...
>
> Is there a tool that'll let one view and hopefully print a schematic
> done in the old Xilinx F2.1i schematic tool? The new stuff doesn't
> want to know about the old stuff, and worse is that you can't even
> install 2.1i on an XP machine. (Yeah, that'll teach me to upgrade.)
>
> I don't want to do anything with this schematic other than view it.
> I'm doing a new board sorta based on an old design, and the new design
> will of course be in VHDL rather than as a schematic.
>
> Ideas?
>
> -a

Aldec's tool Active-HDL has the capability of importing Foundation schematics and entire projects. The import utility not only allows printing, but also importing these files into their format, maintaining them, and even converting them into an HDL design that can be targeted to any family/device. They show this capability on their website:

http://downloads.aldec.com/Previews/Presentations/IP_Core.html

eg