Jim Thomas wrote:
> rickman wrote:
> >
> > I am finding that. It just strikes me as odd that I can buy exactly one
> > memory module at a quarter or even an eighth of what the equivalent
> > SDRAM chips will cost me if I buy 1000 of them. But then electronic
> > purchasing has never been logical... :)
>
> Hey Rick,
>
> Any reason you couldn't just plunk down a DIMM socket?
> I suppose form factor might be one.

Or even an SODIMM skt. They are surface mount and lie horizontally, so there's room for some SMT parts underneath the SODIMM. The skts are a couple of $ (much cheaper than they used to be), but the SODIMMs themselves are a little more expensive per MB than DIMMs.

We've faced this problem of getting individual SDRAMs ourselves, but in the other form of: DIMM lead time = tomorrow afternoon, SDRAM lead time = 6-8 weeks for small(ish) quantities. It seems that unless you're buying 100K/month the distis aren't interested ... and if you want something non-standard like a x32 configuration you can just forget it.
Article: 43426

Lähteenmäki Jussi wrote:
> <snip> Finally, I don't think you should ever use tri-stated on-chip
> buses, they are here only to interface with the outside world...

Sounds a bit harsh. Internal 3-states, when available, can be very efficient. Why ban something on "religious" grounds?

Peter Alfke, Xilinx Applications
Article: 43427

Peter,

It can seem that way sometimes. Let me see if I can add something to this.

Tri-state internal resources were very useful. Notice the past tense. In Virtex, the tri-state is implemented as logic, rather than as a true tri-state, with the signals being separated into two directions (receive and transmit) and logic to fool everyone into thinking it is still a tri-state driver. The reason for this was that tri-state just did not scale with the rest of the interconnect, and it would be far too slow to be useful.

Well, once we have taken the step of just implementing tri-state from logic, the end of tri-state is near! After all, once it is done by the hardware to fool the user, then one can do it with the synthesis, and finally, no tri-state at all is actually required. Virtex-II cut the number of "tri-state" resources in half per CLB, and future generations may eliminate it altogether.

What is interesting about this is that it will never really go away, as one will "instantiate" it, and the tools will implement it in a more efficient manner, in logic.

I was a little nostalgic about this for a few moments, and then I thought, why get religious about it?

Austin

Peter Alfke wrote:
> Lähteenmäki Jussi wrote:
> > <snip> Finally, I don't think you should ever use tri-stated on-chip
> > buses, they are here only to interface with the outside world...
>
> Sounds a bit harsh. Internal 3-states, when available, can be very efficient.
> Why ban something on "religious" grounds?
>
> Peter Alfke, Xilinx Applications
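A minimal Verilog sketch of the substitution Austin describes: the first module is the familiar internal tri-state bus, the second is the AND/OR (mux) logic a synthesis tool can map onto LUTs instead. Module and signal names are made up for illustration, and note that real BUFTs default High when no driver is enabled, so the mapping in actual devices is a little more involved than this.

    // Internal tri-state bus: two drivers sharing one wire.
    module bus_tristate (input  a, b,        // data from the two sources
                         input  en_a, en_b,  // enables (assumed never both active)
                         output shared);
      assign shared = en_a ? a : 1'bz;
      assign shared = en_b ? b : 1'bz;
    endmodule

    // Equivalent logic-only form: each "driver" is gated by its enable
    // and the results are ORed, so no tri-state resource is needed.
    module bus_as_logic (input  a, b,
                         input  en_a, en_b,
                         output shared);
      assign shared = (a & en_a) | (b & en_b);
    endmodule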
"Lähteenmäki Jussi" <jusa@cc.tut.fi> wrote in message news:acda8u$r53$1@news.cc.tut.fi... > In comp.lang.vhdl Tom Hawkins <thawkins@dilloneng.com> wrote: > I remember reading somewhere, that major target for ASIC and SOC designs > is in the telecommunications. Therefore, the power consumption is a big > part in any design and the easiest way to get it down is to drop the > clock frequency. So, I think in any modern design there are several > clock domains included. Another question is the availability to stop the > clock in inactive parts. This also adds complexity to the design. Not > taking any wild guesses here, just stating my point of view. That is a good point about power consumption. Is stopping the clock of a major subsystem equivalent to globally enabling the entire subsystem? In both cases, the only gates that change state are those between the system's inputs down to the first set of registers. If this true and we assume all the components in a SOC run well at the same clock rate, then the entire (internal) SOC design can have single clock domain. But maybe this is a bad assumption. How often to all/most of the cores in a SOC run at the same rate? > > Also, I think people are using FIFOs too easily to connect their clock > domains. I understand the need in your case as you are doing DSP and > such, but when there is just simple register R/W -operations between two > clock domains, FIFOs are a bit extreme. Finally, I don't think you > should ever use tri-stated on-chip buses, they are here only to > interface with the outside world... > Agreed about FIFOs. Yes, we use them for streaming data into our DSPs. For simpler applications we resort to pairs of registers handshaking with asynchronous clears. Regards, Tom > regards, > juza > > : I'm just trying to get an idea what percentage of logic > : designs are synchronous single clock domain > : with strict I/O, meaning no tristates or shared buses. > > : Sure almost every complete chip design has several clock > : domains and most have some form of a shared inout bus. > : But I am curious what percentage of a design is purely > : synchronous single clock without shared buses. > > : The last two years I have worked for an FPGA design firm > : that specialized in image processing cores. All of our > : individual cores were built synchronous, single clock. > > : At times when we needed to integrate several cores together > : and connect to external devices, we would create a top level > : design unit that contained all the shared buses and tristates. > : If we needed to move data across clock domains we would drop > : in a multi-clocked FIFO between two design components. > : But still all the major design units were synchronous, > : single clock logic void of bidirectional buses--accounting for > : about 95-98% of a total design. > > : Is this pretty typical? I'm curious what percentages are > : common within the ASIC and SoC design communities. > > : Thanks for your input. > > : Tom > > > > -- > JuzaArticle: 43429
Article: 43429

Hi,

My question is this: how many of the different phase clock signals (0, 90, 180, ...) can I use from a single DCM unit?

Thanks in advance,
Eyal.
"Cyrille de Brébisson" <cyrille_de-brebisson@hp.com> schrieb im Newsbeitrag news:acdiue$n87$1@web1.cup.hp.com... > Hello, > > I am trying to make a simple design in which I have 16 32 bit wide input > that I need to put in a buffer and then deliver back as 8*64 bit output. > > I was wondering if it was better to: > > implement a 16*32 shift register, shift my data in, and after 16 cycles, > output the 8*64 bits, or > store each 32 bit value directly in it's final position in the 16*32 > register using a state machine to know where to store each data. In a Xilinx device, just use a SRL16. This gives you 32 16-Bit shift registers plus output MUX. You only need a additional 8 bit 4:1 MUX after this. -- Regards FalkArticle: 43431
Article: 43431

If you're using a device family without flexible embedded RAM configurations, the choice is probably arbitrary.

If you're using the Xilinx CLB SelectRAM, the 64-wide structure gets my vote. On the write you enable each half of the 64-bit value on alternate clocks. On the read you read out all 64 bits in each clock cycle. The 32-bit wide arrangement would need to read two values from one shift register element every read clock - not a friendly configuration.

Keep in mind that the 64-wide structure can also be implemented as a shift register.

"Cyrille de Brébisson" wrote:
> Hello,
>
> I am trying to make a simple design in which I have 16 32 bit wide inputs
> that I need to put in a buffer and then deliver back as 8*64 bit output.
>
> I was wondering if it was better to:
>
> implement a 16*32 shift register, shift my data in, and after 16 cycles,
> output the 8*64 bits, or
> store each 32 bit value directly in its final position in the 16*32
> register using a state machine to know where to store each data.
>
> Which one do you think is the 'cleanest' design, the simplest design, which
> one would you recommend?
>
> regards, Cyrille
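A behavioral Verilog sketch of the shift-register buffering discussed in the last two posts. All names are illustrative, the pairing of 32-bit words into 64-bit words is an arbitrary convention, and whether a given synthesis tool maps this onto SRL16s or CLB SelectRAM depends on how it infers addressable shift registers:

    // 32-bit-in / 64-bit-out buffer built as a 16-deep shift register.
    // After 16 enabled clocks the 8 64-bit words can be read via rd_sel.
    module gearbox_32to64 (
      input             clk,
      input             wr_en,   // one 32-bit word shifted in per enabled clock
      input      [31:0] din,
      input      [2:0]  rd_sel,  // which of the 8 64-bit words to read
      output     [63:0] dout
    );
      reg [31:0] sr [0:15];      // 16-deep, 32-bit-wide shift register
      integer i;

      always @(posedge clk)
        if (wr_en) begin
          for (i = 15; i > 0; i = i - 1)
            sr[i] <= sr[i-1];    // older words move toward the far end
          sr[0] <= din;
        end

      // Pair up adjacent entries into one 64-bit output word.
      assign dout = {sr[{rd_sel, 1'b0}], sr[{rd_sel, 1'b1}]};
    endmodule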
Article: 43432

Peter Alfke wrote:
> Lähteenmäki Jussi wrote:
> > <snip> Finally, I don't think you should ever use tri-stated on-chip
> > buses, they are here only to interface with the outside world...
>
> Sounds a bit harsh. Internal 3-states, when available, can be very
> efficient.
> Why ban something on "religious" grounds?
>
> Peter Alfke, Xilinx Applications

Question: Are Xilinx devices robust against the possibility that 2 TBUFs drive the same horizontal line to opposite values for an extended period of time, say several 10s or 100s of msec? Even if they are, isn't contention noise likely to affect things like DLL/DCMs (that would be a nasty bug to try & find)?

It's this question and its dual - what happens when there are no drivers enabled and the line floats - that makes ASIC vendors dislike tristates, and I'd guess the OP is basically an ASIC guy. The issue here is that the only real way to ensure the ``no contention'' condition to the level needed to satisfy an ASIC house is by using formal verif techniques on the source code.

All that said, I tend to agree with Peter that they are an efficient, LUT-minimising, solution in some circumstances.
Article: 43433

So that would be this address:

http://support.xilinx.com/xlnx/xil_prodcat_landingpage.jsp?title=ISE+WebPack

- Then click on 'Download ISE WebPACK' (or register first if you don't have a Xilinx login)
- Click on the CPLD link
- Change the version to 3.3WP8.1
- Click Download button
- Download the XPLA Programmer module

As Jim said in his earlier post - the XCR5xxx CPLD families were obsoleted by Xilinx 1 to 2 years ago, and you need to use the older programming tool in order to program these devices.

For information on the obsoleted CoolRunner devices, go here:

http://www.xilinx.com/xlnx/xil_prodcat_product.jsp?title=coolpld_page

Regards,
Arthur

Neil Glenn Jacobson <neil.jacobson@xilinx.com> wrote in message news:<3CE96BD3.7B0BD423@xilinx.com>...
> You will need to use the XPLA Programmer which you can download from the
> link below
>
> Xilinx Home : Products and Solutions : Design Resources : Free ISE WebPACK
>
> Derren Crome wrote:
>
> > Hi,
> >
> > How do I program a XCR5064 coolrunner? The ISE4.2i IMPACT tool detects
> > there is a JTAG device but cannot identify part (IDCODE). Tried earlier
> > versions of the JTAG programming software with same results.
> >
> > Any ideas?
> >
> > Thanks
> >
> > --
> > Derren Crome
Article: 43434

There is a bug in the flow... if you look in your project directory for a .tmv file, you will see that this is a somewhat readable ASCII test vector file. Change the input and output port names to match the names in the fitter report.

I believe what is going on is that the Jedec tool is case sensitive, and the proper case is lost during the creation of the TMV file - and as a result, the Jedec tool can't figure out that DATAIN == datain, and therefore you get XXXX's in your Jedec file.

This functional test problem doesn't have anything to do with the BSDL errors you are getting. You could have the problem as described in Solution record 12737 (http://support.xilinx.com/xlnx/xil_ans_display.jsp?getPagePath=12737).

Regards,
Arthur

Joerg Schneide <JSchneide@t-online.de> wrote in message news:<3CEA1281.B464CCDE@t-online.de>...
> Hello,
>
> I have problems with the test vectors in an ABEL design for
> a XC9572. They don't appear in the Jedec file, there are
> only X's: "V0001 XXXXXXX...".
> I have enabled the "Jedec Test Vector File" checkbox in the
> "Generate Programming File" properties, there are no lower case
> signal names.
>
> How to put my vectors in the Jedec file?
>
> When I try to program the device with the "Functional Test"
> setting in iMPACT, there is an error message related to
> missing BSD-Files (so told me the Answer Database at Xilinx)
> but they are present and I downloaded the newest.
>
> Is it possible that this error occurs due to the missing
> test vectors? If not, how to fix this one?
>
> I am using WP Project Navigator 4.1WP2.x, build+E-32+0
> and the related iMPACT version on Win98.
>
> Thanks,
>
> Joerg.
Article: 43435

Austin Lesea wrote:
> Peter,
>
> It can seem that way sometimes. Let me see if I can add something to
> this.
>
> Tri-state internal resources were very useful. Notice the past tense.
> In Virtex, the tri-state is implemented as logic, rather than as a true
> tri-state, with the signals being separated into two directions (receive
> and transmit) and logic to fool everyone into thinking it is still a
> tri-state driver.

That's my contention question answered, unless you've taken ``fooling'' to the extreme of modeling contention noise :-)).

More seriously, what does happen now in the cases where (a) there's more than one "driver", (b) there are no "drivers"? Are there any circumstances where RTL simulation passes but the actual device doesn't work?
Article: 43436

Austin Lesea (Xilinx) wrote:
>
> Tri-state internal resources were very useful. Notice the past tense.
>
> The reason for this was tri-state just did not scale with the rest of
> the interconnect, and it would be far too slow to be useful.

Given who Austin works for, it's not surprising that his comments are pretty FPGA specific ;-)

For those not listening on c.a.fpga, tristates on-chip in ASICs remain very useful for intermodule comms: tristate busses are narrower than they would otherwise have to be, thereby saving a bundle of interconnect, area (=cost) and likely power and EMC into the bargain. Speed may not be as high as separating out the busses, due largely to delays inside the buffer circuits themselves, but delays in this sort of situation are increasingly dominated by signal propagation down the distributed R-L-C of the wires anyway.

However, in answer to Tom's original question, it's probably reasonable to assume that clever intermodule comms using tristates etc only appear at the top level of current ASIC designs, and that the modules themselves are likely to have simple internal comms and fairly simple clocking arrangements.

cheers,
Chris
Article: 43437

Rick Filipkiewicz wrote:
> More seriously what does happen now in the cases where (a) there's more than
> one "driver", (b) there are no "drivers" ? Are there any circumstances where
> RTL simulation passes but the actual device doesn't work ?

I thought I read this stuff straight from the datasheet but I may be wrong.

a) more than one driver will result in a logic low if any of the drivers are low.
b) no drivers will result in a logic high.

Ah, found it. In the Spartan-II datasheet, for instance:

<quote>
BUFTs
When all BUFTs on a net are disabled, the net is High. There is no need to instantiate a pull-up unless desired for simulation purposes. Simultaneously driving BUFTs onto the same net will not cause contention. If driven both High and Low, the net will be Low.
</quote>

The RTL simulation will provide Xs wherever you have contention and don't have the simulation set up to resolve the logical outcome.

I was using tristates for a while since it appeared I could get better combinatorial results to my IOBs from nearby logic. Once the new timing files came in, the delays were in favor of the standard logic route. To get my *post place and route* Verilog simulations to come out without the contention, I had to replace the X_TRI module with my own version with a "bufif1 (weak1,strong0) (O, I, CTL);" instantiation to resolve the multiple drivers. With the new timing I moved my logical AND from the tristates to the synchronous reset logic in the IOB flop.

I love pushing logic around to places it doesn't normally belong. :-)
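A minimal sketch of the kind of replacement module described above. The port names O, I and CTL are taken from the bufif1 line in the post, not from the actual Xilinx simulation-primitive source, so treat this purely as an illustration of the drive-strength trick:

    // Stand-in tri-state buffer for simulation: the High drive is weakened
    // so that any driver pulling Low wins, mirroring the "driven both High
    // and Low -> net is Low" BUFT behaviour quoted from the data sheet.
    module x_tri_weak (output O, input I, input CTL);
      bufif1 (weak1, strong0) u1 (O, I, CTL);  // drives I onto O when CTL is high
    endmodule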
Article: 43438

Ray Andraka wrote:
> Thing is SPDT switches are more expensive, as are extra pins. If you do have the
> unlikely luxury of a SPDT switch and pins to spare, a simpler set up is to use the
> switch ...snip

You can avoid the extra pins by internally driving an output pin from its own input. That makes this pin into a latch. Externally connect it to the common terminal of the switch, and connect the other two to Vcc and ground. Reduce the output drive to the lowest value.

This may appear brutal, but the contention lasts only a nanosecond or two...

Peter Alfke
Article: 43439

Rick,

Other than the simulation issue of displaying an 'x' and 'z' at the 'right' times, there is no real issue here either. Of course in logic the levels are always 0 or 1, so inferring when they might be 'tri-state' or in contention is done by the simulation tools.

If the simulation shows no problems, logic is more likely to work than tri-state.

Austin

Rick Filipkiewicz wrote:
> Austin Lesea wrote:
>
> > Peter,
> >
> > It can seem that way sometimes. Let me see if I can add something to
> > this.
> >
> > Tri-state internal resources were very useful. Notice the past tense.
> > In Virtex, the tri-state is implemented as logic, rather than as a true
> > tri-state, with the signals being separated into two directions (receive
> > and transmit) and logic to fool everyone into thinking it is still a
> > tri-state driver.
>
> That's my contention question answered, unless you've taken ``fooling'' to
> the extreme of modeling contention noise :-)).
>
> More seriously what does happen now in the cases where (a) there's more than
> one "driver", (b) there are no "drivers" ? Are there any circumstances where
> RTL simulation passes but the actual device doesn't work ?
Article: 43440

Eyal,

Up to four outputs may be routed to BUFGs from a DCM.

Austin

Eyal Shachrai wrote:
> hi ,
>
> my question is this : how many of the different phase clock signals
> (0,90,180 ...) can I use of a single DCM unit .
>
> thanks in advance,
> Eyal.
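For illustration, a Verilog sketch of one DCM with its four quadrant-phase outputs each placed on a global buffer. The port names follow the Virtex-II DCM and BUFG primitives, but tie-offs and attributes (CLKIN_PERIOD, CLK_FEEDBACK, etc.) are omitted, so take this as a sketch rather than a complete, verified instantiation:

    module four_phase_clocks (
      input  clk_in,       // board clock
      input  rst,
      output clk0, clk90, clk180, clk270,
      output dcm_locked
    );
      wire clk0_w, clk90_w, clk180_w, clk270_w;   // raw DCM outputs

      DCM dcm_i (
        .CLKIN  (clk_in),
        .CLKFB  (clk0),      // feedback taken from the buffered CLK0
        .RST    (rst),
        .CLK0   (clk0_w),
        .CLK90  (clk90_w),
        .CLK180 (clk180_w),
        .CLK270 (clk270_w),
        .LOCKED (dcm_locked)
      );

      BUFG bufg0   (.I(clk0_w),   .O(clk0));
      BUFG bufg90  (.I(clk90_w),  .O(clk90));
      BUFG bufg180 (.I(clk180_w), .O(clk180));
      BUFG bufg270 (.I(clk270_w), .O(clk270));
    endmodule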
Article: 43441

In article <3CE978C7.14011AE8@yahoo.com>, rickman <spamgoeshere4@yahoo.com> wrote:
> Traveler wrote:
> > Thanks for the informative reply. Here's a brief explanation of the
> > neurons and their connections.
> >
> > Neurons
> >
> > Most of my neurons can be loosely compared to AND gates. I add up the
> > strength (16-bit integer) of all the input synapses that fire at a
> > given time. If it is over 90% of the total strength of all inputs,
> > the neuron fires. The connections that contributed to firing are then
> > strengthened and the others are weakened. Some of my neurons use a
> > fixed time delay on all connections except one.
> >
> > Most neurons have a single input synapse that I call the master
> > synapse. All other input synapses are slaves. The neuron cannot
> > fire unless the master synapse fires. So there is no need to do an
> > input summation unless a signal arrives at the master synapse.
> >
> > Command Neurons
> >
> > These neurons have the highest number of input synapses. A command
> > neuron fires every time one of its input signals arrives. Command
> > signals will have a special remedial input whose signal strongly
> > suppresses the most recently fired synapse.
> >
> > Searching
> >
> > The system needs to periodically (every second or so) make random
> > connections between neurons in a layer to neurons in a downstream
> > layer. All connections are given a low initial strength. Connections
> > persist indefinitely unless neural activity weakens their strength
> > below what I call a disconnect level, at which time they are
> > disconnected.
> >
> > Timing and Signal Flow
> >
> > The signal arrival time at the master synapse is the reference time.
> > Ideally, the temporal resolution should be as high as possible. But
> > since I want to emulate biological neurons, I will be satisfied with a
> > 1 millisecond resolution. Signal flow through the system must remain
> > constant because timing is critical. The making and breaking of
> > connections must not disrupt signal flow. This seems to require some
> > sort of master clocking mechanism.
> >
> > Layers
> >
> > There is only one type of neuron in any given layer.
> >
> > Temporal Intelligence:
> > http://home1.gte.net/res02khr/AI/Temporal_Intelligence.htm
>
> Your neurons are quite complex, having a weight for every synapse input.
> This requires that each input have an N bit register for the weight
> along with a mechanism for updating the weight on firing etc. Each of
> these weights for synapses that have fired must be summed. BTW, this is
> nothing like an AND gate, this is purely an arithmetic operation.
> Although the "Master Synapse" is processed with an AND gate.
>
> I am still not clear if the weighting you have described is the same as
> the connect/disconnect mechanism you described in the earlier post. If
> so, the structure is not too much more complex than what we originally
> thought. If this weighting is in addition to the connect/disconnect
> mechanism, then there is a lot more logic involved than I had originally
> thought.
>
> Your problem is basically of complexity order M, where M is the total
> number of synapse inputs to all of the neurons. If you want 200,000
> neurons and 200 synapses each, you need to evaluate 40,000,000 synapses
> per ms or 40,000,000,000 per sec. Assuming a clock of 100 MHz, this
> requires 400 processing units to achieve that speed. With only 168 block
> RAMs to work with in the largest FPGA, you will fall short by a factor
> of 3. The speed can be met by reducing the number of synapses and/or
> neurons.

Note that only a small subset of neurons and synapses will be active (firing) at any one time. This should drastically cut down on the speed requirement.

> But there is a tougher problem. Your arithmetic synapses require the
> storage of 40 million 8 bit weights (I am assuming this size). The 168
> block rams can only store 344,000 8 bit numbers. Off chip storage is
> difficult due to the high bandwidth required (320 Gbps each read and
> write). To use external DDR-SDRAM, it would require 80 memory chips at
> 32 bits and 266 MHz. To bring this to a manageable size you would need
> to cut your total synapse size by a factor of about 4 (100 k neurons,
> 100 synapses each). This would require 20 memory chips, 32 bits each,
> 266 MHz and 640 data IOs. Or 10 memory DIMMs could be used. RDRAM
> RIMMs run at up to 800 MHz, 18 bits per module. Using 20 modules would
> provide 256 Gbits/s on a 320 bit data bus or just less than half the bit
> rate required. This of course assumes that the RDRAM electrical spec is
> supported by the VII chips.
>
> So it appears that in the Xilinx VII FPGAs your algorithm would be
> memory bound and not speed bound. External memory is possible, but not
> at full speed. The use of standard DDR memory or modules would
> facilitate the solution, or RDRAM might allow it to operate closer to
> full speed.
>
> But don't think this is an easy hardware solution. Connecting that much
> memory in parallel to one chip would be a difficult layout task.
> Initially you might want to work with a much smaller number of neurons
> to get the approach and design details ironed out. Work within a single
> chip on an off the shelf board initially. Then plan your custom board
> based on your results.
>
> Or as I suggested before, you could do the same amount of processing in
> software on 100 DSPs. This is a lot, but they will be much more
> flexible and should be simpler to optimize.

I am grateful that you have taken the time to help me with this problem. Your comments are invaluable to me. I must weigh the pros and cons of using a pure software solution versus a hybrid software/hardware solution. I think I'll take your advice and try a simple implementation at first as a feasibility study. Again, I thank you for your help.

Temporal Intelligence:
http://home1.gte.net/res02khr/AI/Temporal_Intelligence.htm
Article: 43442

Would an input with a KEEPER accomplish your goal? The KEEPER component or constraint is supported in Virtex(-E), Spartan-II(E), Virtex-II(Pro), and CoolRunner-II.

It's always nice to save pins.

Peter Alfke wrote:
> Ray Andraka wrote:
>
> > Thing is SPDT switches are more expensive, as are extra pins. If you do have the
> > unlikely luxury of a SPDT switch and pins to spare, a simpler set up is to use the
> > switch ...snip
>
> You can avoid the extra pins by internally driving an output pin from its own input.
> That makes this pin into a latch. Externally connect it to the common terminal of the
> switch, and connect the other two to Vcc and ground.
> Reduce the output drive to the lowest value.
> This may appear brutal, but the contention lasts only a nanosecond or two...
>
> Peter Alfke
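For reference, a minimal Verilog sketch of attaching the keeper to a switch pin. The single-port form of the KEEPER primitive is quoted from memory of the libraries guide, and the module and net names are invented, so check this against your version of the tools (the same thing can also be expressed as a KEEPER constraint on the pad net):

    module kept_input (inout sw_pad, output sw_level);
      // Weak keeper holds the pad at its last driven level while the
      // switch is between contacts, so the input never floats.
      KEEPER keeper_sw (.O(sw_pad));
      assign sw_level = sw_pad;   // use the pin as an ordinary input elsewhere
    endmodule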
Article: 43443

If you are using Virtex-II, and you have a BlockRAM to spare, you can just write into one port 32 bits wide, and read out from both ports in parallel (one port addressing even locations, the other one odd locations), totally 64 bits wide.

The available depth (512 locations of 32 bits) is an overkill, but who cares when you have a BlockRAM to spare... :-)

Peter Alfke, Xilinx Applications
=============================
"Cyrille de Brébisson" wrote:
> Hello,
>
> I am trying to make a simple design in which I have 16 32 bit wide inputs
> that I need to put in a buffer and then deliver back as 8*64 bit output.
>
> I was wondering if it was better to:
>
> implement a 16*32 shift register, shift my data in, and after 16 cycles,
> output the 8*64 bits, or
> store each 32 bit value directly in its final position in the 16*32
> register using a state machine to know where to store each data.
>
> Which one do you think is the 'cleanest' design, the simplest design, which
> one would you recommend?
>
> regards, Cyrille
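A behavioral Verilog sketch of the addressing idea Peter describes: writes are 32 bits wide, and each 64-bit read is assembled from an even and an odd location read at the same time. Names are illustrative, and mapping the two simultaneous reads onto the A and B ports of an actual BlockRAM primitive is left to the designer or the tools:

    module buf_32w_64r (
      input             clk,
      input             we,
      input      [3:0]  waddr,   // 16 locations of 32 bits
      input      [31:0] din,
      input      [2:0]  raddr,   // 8 locations of 64 bits
      output reg [63:0] dout
    );
      reg [31:0] mem [0:15];

      always @(posedge clk) begin
        if (we)
          mem[waddr] <= din;
        // Even location in the high half, odd location in the low half
        // (the ordering convention here is arbitrary).
        dout <= {mem[{raddr, 1'b0}], mem[{raddr, 1'b1}]};
      end
    endmodule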
Article: 43444

Greetings All,

I would like to interface an 8 bit signal, carried as 8x5V RS422 differential pairs, into a Virtex 2. The data rate is < 15 MHz, 50% duty cycle. Past experience suggests that the use of termination resistors by the receiver is not necessary - i.e. there is no adverse effect if they are not used.

One option is to use dedicated 5V differential -> 3.3V single ended conversion chips, but so far I have only been able to find an SMD chip for this, and I'd rather not have to go there...

So, the question is:

Can I use external clamp diodes and LVDS IOBs, or is there some other way of achieving this with clamps/resistors, or should I just bite the bullet and learn to solder SMD?

Cheers,
Chris Saunter
Article: 43445

> Austin Franklin wrote:
> >
> > Hi Kevin,
> >
> > For a target, tsu should not matter, as you should register all the PCI I/O
> > in the IOBs. It is only for a master that you really need to use some of
> > the signals raw, and I/O timing becomes an "issue".
> >
> > Austin
>
> Although I no longer have Tsu related timing issues in my PCI IP
> core, at least for 33MHz PCI, I believe even in a target only PCI
> interface, some signals will have to be used right off the PCI bus
> unregistered.
> The first example I can think of is, in a backoff state, the target will
> have to reference an unregistered FRAME# to see if the initiator (bus
> master) is ending the transaction.
> The second example will be to compare the parity of AD[31:0] and
> C/BE#[3:0] with PAR one clock cycle after an address phase.
> The third example will be during a no-wait burst cycle where FRAME# will
> have to be monitored every clock cycle to see if the transfer is ending.
> In my implementation, other than address decode, I really don't use the
> registered version of the PCI signals, but I can still easily meet Tsu <
> 7ns in a Spartan-II-5.

Hi Kevin,

I don't believe any of your issues are "required" for a basic target.

1) backoff - for non-burst, I do not believe that is an issue.

2) AD/CBE doesn't have to be done on raw signals. You pipe them so they line up correctly.

3) FRAME/no-wait burst - again, for a non-burst target, this is not an issue.

Regards,

Austin
Article: 43446

Christopher,

The LVDS differential receivers will work with full swing signals, but they won't meet the 420 MHz speed specification that way. Best is to use the voltage swing it was designed for: 400 mV peak to peak.

Since Virtex-II also has clamp diodes to Vcco and ground, operating it at 3.3 V Vcco will clamp a 5V signal to a diode drop above Vcco, so some series resistance is required, either in the driver, or physically separate. I suggest a resistor network to do the job for you.

As for soldering SM parts, get a good magnifying glass setup, a good solder station, a pair of tweezers, another soldering station (so you can heat both ends if you need to pull one off -- this works really well, you just need three hands .... I place the tweezers in my teeth to flip the SM part off if I soldered the wrong one on there the first time), and some small diameter solder.

It is just a technical skill, not unlike playing a musical instrument. Practice......

Austin

Christopher Saunter wrote:
> Greetings All,
> I would like to interface an 8 bit signal, carried as 8x5V
> RS422 differential pairs, into a Virtex 2. The data rate is < 15 MHz, 50%
> duty cycle. Past experience suggests that the use of termination resistors
> by the receiver is not necessary - i.e. there is no adverse effect if they
> are not used.
>
> One option is to use dedicated 5V differential -> 3.3V single ended
> conversion chips, but so far I have only been able to find an SMD chip
> for this, and I'd rather not have to go there...
>
> So, the question is:
> Can I use external clamp diodes, and LVDS IOBs, or is there some other
> way of achieving this with clamps/resistors, or should I just bite the
> bullet and learn to solder SMD?
>
> Cheers,
> Chris Saunter
Article: 43447

Hi,

> There may also be soft core starting points for this, plus you
> have reference silicon to compare with :-), and the option of a
> hard core, with FLASH, which is likely to be more economical
> than a soft core.
>
> -jg

I think that for prototypes or small production runs the soft core is more economical. Why?

- The device hardware can be unified, and then changed in software.
- It reduces the cost of board design and manufacturing from the start.
- It reduces the cost of buying the chips that would otherwise sit around the processor.
- It is possible to design new instructions or new processor hardware.
- The board design is simpler, because all the chips around the processor are inside the FPGA.
- FPGAs have more pins than a typical hard processor, which eases the design of the device.
- A new device can be brought up quickly.
- It is possible to design a new device architecture, e.g. a delta-sigma DAC, which in the traditional technique requires many chips, or an inflexible solution based on hard-core DAC chips.
- Generally, a soft core increases the freedom of the design process and reduces the cost of bringing up a new device.

One disadvantage is that it is difficult to protect the design from being copied.

JanuszR
Article: 43448

You still have the problem of requiring an SPDT switch. That rules out keypads, membrane switches and most other low cost switches. A debounce circuit based on time since first transition is not very big in the context of modern FPGAs.

Peter Alfke wrote:
> Ray Andraka wrote:
>
> > Thing is SPDT switches are more expensive, as are extra pins. If you do have the
> > unlikely luxury of a SPDT switch and pins to spare, a simpler set up is to use the
> > switch ...snip
>
> You can avoid the extra pins by internally driving an output pin from its own input.
> That makes this pin into a latch. Externally connect it to the common terminal of the
> switch, and connect the other two to Vcc and ground.
> Reduce the output drive to the lowest value.
> This may appear brutal, but the contention lasts only a nanosecond or two...
>
> Peter Alfke

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759
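A minimal Verilog sketch of the "time since first transition" debouncer Ray mentions. All names and the hold-off length are illustrative; a real design would size the counter from its actual clock frequency and expected bounce time:

    module debounce #(
      parameter HOLD_CYCLES = 200_000      // roughly 10 ms at 20 MHz
    )(
      input      clk,
      input      sw_raw,      // asynchronous, bouncy SPST switch input
      output reg sw_clean
    );
      reg [1:0]  sync;        // two-flop synchronizer for the async input
      reg [17:0] holdoff;     // wide enough to hold HOLD_CYCLES

      initial begin sw_clean = 1'b0; sync = 2'b00; holdoff = 0; end

      always @(posedge clk) begin
        sync <= {sync[0], sw_raw};

        if (holdoff != 0)
          holdoff <= holdoff - 1;       // in the dead time: ignore further chatter
        else if (sync[1] != sw_clean) begin
          sw_clean <= sync[1];          // accept the first transition immediately
          holdoff  <= HOLD_CYCLES;      // then arm the dead time
        end
      end
    endmodule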
Article: 43449

Austin Franklin wrote:
>
> Hi Kevin,
>
> I don't believe any of your issues are "required" for a basic target.

Well, you didn't say "for a basic target" initially. I was thinking of a high performance PCI interface, just like commercial PCI IP cores that do no-wait burst transfers.

> 1) backoff - for non-burst, I do not believe that is an issue

Although I still believe a raw (unregistered) FRAME# has to be monitored to correctly relinquish the bus.

> 2) AD/CBE doesn't have to be done on raw signals. You pipe them so they
> line up correctly.

That's what I do in my implementation. Because of that, it does medium DEVSEL# decode.

> 3) FRAME/no-wait burst - again, for a non-burst target, this is not an
> issue.

Yes, I agree that a raw FRAME# doesn't have to be monitored in a non-burst target implementation, but I was thinking of a no-wait cycle burst implementation.

I have to admit, if the original poster of this thread was only interested in getting his PCI board just to work, then the PCI interface implementation should be kept very simple; therefore, it probably makes sense to avoid using unregistered signals as much as possible. But I still think he will be much better off using Spartan-II instead of XC4000XL, because the LUTs are much faster.

Kevin Brace (In general, don't respond to me directly, and respond within the newsgroup.)