Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Jim Granville wrote:
> Peter Alfke wrote:
> > There are two directions: FPGA to TTL, and TTL to FPGA.
> > Today, all the FPGAs I know still support 3.3 V as I/O supply voltage.
> > (That may change in the future, for 3.6-V tolerance is not natural or
> > easy in the best and fastest processes.)
> >
> > TTL-to-FPGA: Old bipolar TTL generally stayed 1 or 2 diode drops below
> > Vcc, but CMOS variants can swing all the way to Vcc = 5.5 V max. Most
> > FPGAs do not tolerate >3.6 V on their pins, but most also have a clamp
> > diode to their own Vcc. If you can rely on that diode, then use 2.5 V
> > or 3.3 V as Vccio and put a series resistor of 100 or 220 Ohm between
> > the FPGA and TTL pin, to limit any clamp current.
> >
> > FPGA-to-TTL: Usually no problem, since TTL sees anything above 2.4 V
> > as High. There are, however, "TTL" derivatives with CMOS input
> > thresholds that might be up to 3.5 V. In that case, you should 3-state
> > the FPGA output and use a pull-up resistor to 5 V (relying on the
> > clamp diode to the 3.3 V Vcco). This is a slow pull-up, and there is a
> > trick to temporarily enable the active pull-up to generate the first
> > 2 V of voltage swing.
>
> There is also a growing range of dual-Vcc translators to service this
> market that refuses to die... TI has quite a number now, and I see
> Philips following them.
>
> You need a dual supply if power supply drain is any sort of a concern.
> The clamp-diode tricks above are simple, but do not do good things to
> the power drain figures...
>
> > 5 V should today be considered an obsolete and awkward supply voltage,
> > although it has served us well for 40 years. (In a few years, 3.3 V
> > will cause the same grief...)
>
> Sorry Peter, but much as the FPGA sector wants 5 V to go away, it's
> still here. In fact, the newest devices from Infineon and Freescale
> have 5 V ports!
>
> Yes, they have lower-voltage cores, but that is hidden from the
> designer, i.e. the silicon vendor takes the trouble!
> ISTR the Freescale one impressed me, as it appears to not need a core
> decoupling pin - not sure how they managed that.

I believe some of the smaller MCUs have on-chip decoupling. They could also have regular caps inside the package; I'm not sure how much cost that adds, but for the big PowerPCs with megabytes of flash it can't be much.

> Why? - noise immunity, ease of interface: have you ever tried to find
> a power MOSFET that can be driven from 3.3 V?

I'm sure that the 5 V outputs come at a cost; it's just that for chips designed for applications that require 5 V drive to e.g. MOSFETs, the cost is less than having to add external circuitry. I'm sure that if the main application for FPGAs were automotive, they would have 5 V I/Os as well.

-Lasse

Article: 97726
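As a sanity check on the series-resistor values Peter suggests, here is a quick sketch of the worst-case clamp current into the FPGA's diode (the 0.7 V diode drop is my assumption, not from the post):

```python
# Worst-case current through the FPGA's clamp diode when a 5.5 V CMOS
# "TTL" output drives a 3.3 V-clamped FPGA pin through a series resistor,
# as described in the post. 0.7 V forward drop is an assumed value.
V_DRIVER = 5.5      # max CMOS "TTL" output swing (V)
V_CCIO   = 3.3      # FPGA I/O supply the clamp diode ties to (V)
V_DIODE  = 0.7      # assumed forward drop of the clamp diode (V)

def clamp_current_ma(r_ohms):
    """Current forced into the clamp diode, in mA."""
    return (V_DRIVER - (V_CCIO + V_DIODE)) / r_ohms * 1000

for r in (100, 220):
    print(f"{r} ohm -> {clamp_current_ma(r):.1f} mA")
```

Roughly 15 mA with 100 Ohm and 7 mA with 220 Ohm, which is why the resistor matters: without it the diode would take whatever the driver can source.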
Hello,

I plan on synthesizing MiniUART (opencores.org) onto a Spartan-3 demo board, but then I don't know how to test it. How do I configure/run a terminal console in Windows to talk with my board?

Thanks,
Peter

Article: 97727
> i plan on synthesizing miniuart (opencores.org) onto a spartan 3 demo
> board. but then i dont know how to test it. how do i configure/run a
> terminal console in windows to talk with my board?

HyperTerminal?

Article: 97728
Isaac Bosompem wrote:
> logjam wrote:
> > I want the ability to build the whole computer using TTL logic, but
> > also put it in an FPGA. I'm learning VHDL as I go. Since the code is
> > generated from my TTL schematic, I can test the giant circuit before
> > I produce a PCB and solder hundreds of chips.
> >
> > Just a thought, but wouldn't the delay using cascading stages without
> > a clock take just as much time as if you used a clock? Instead of
> > using the same stage over and over again it's just duplicated? I
> > think what I will do is use the 64-bit ALU that supports subtraction
> > and addition, throw in two shift registers, and a state machine to
> > control timing.
>
> The thing is, though, that this method will use an exorbitant amount
> of hardware. This will also result in a long path and will be very
> slow due to propagation delay. On top of that it will depend on your
> logic

The long path shouldn't be any longer than 64 times the length of the short path. The latency wouldn't change much, but in a pipelined environment, the rate could change a lot.

If one knows the division is exact, one can make a divider similar to a ripple-carry array multiplier:

http://www.andraka.com/multipli.htm#Ripple%20Carry%20Array%20Multipliers

Arrange for the divisor to be odd.

If one calculates a reciprocal using Newton-Raphson, each iteration roughly doubles the number of significant bits. The amount of hardware needed to unroll the loop will not be much more than that required for the last iteration. The delay will also be dominated by the delay in the last iteration. 6 or 7 iterations should be sufficient. After taking the reciprocal, one must multiply by it. The amount of hardware required will be roughly twice that required for multiplication. The delay will also be dominated by the delay for the last iteration. I suspect Newton-Raphson is not the OP's idea of simple.

A Wallace tree is probably not best for multiplying large numbers.
Given 64-bit factors and single-bit partial products, the Karatsuba formula is probably useful.

http://mathworld.wolfram.com/KaratsubaMultiplication.html

Article: 97729
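The Newton-Raphson reciprocal scheme described above can be prototyped in software before committing it to hardware. A minimal fixed-point sketch (the 32 fractional bits, the seed choice, and the iteration count are my assumptions, not from the post):

```python
def fixed_recip(d, frac_bits=32, iters=6):
    """Fixed-point reciprocal of the positive integer d, scaled by
    2**frac_bits, via Newton-Raphson: x <- x * (2 - d*x).
    Each iteration roughly doubles the number of correct bits, so
    6 iterations comfortably reach 32 fractional bits from a crude
    power-of-two seed."""
    one = 1 << frac_bits
    # Seed 2**-ceil(log2 d): guarantees a relative error below 50%.
    x = one >> d.bit_length()
    for _ in range(iters):
        x = (x * (2 * one - d * x)) >> frac_bits
    return x

# Divide by multiplying with the reciprocal, as the post suggests.
q = (1000 * fixed_recip(7)) >> 32        # 1000 // 7
```

Truncation keeps the estimate a few LSBs below the exact reciprocal, which is harmless here; a hardware version would keep only as many partial-product bits per iteration as that iteration has correct bits.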
Hi, check this good page - a video timing calculator:

http://www.tkk.fi/Misc/Electronics/faq/vga2rgb/calc.html

I used it to make a driver for VGA and NTSC interfaces. Good luck.

Luis, Mexico

Article: 97730
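The arithmetic behind such a calculator is straightforward. As an illustration, the classic 640x480 mode (standard VESA figures, not taken from the linked page):

```python
# Classic 640x480 VGA timing: the total counts include the front porch,
# sync pulse, and back porch around the visible area.
PIXEL_CLOCK_HZ = 25_175_000                     # standard 25.175 MHz dot clock
H_VISIBLE, H_FP, H_SYNC, H_BP = 640, 16, 96, 48
V_VISIBLE, V_FP, V_SYNC, V_BP = 480, 10, 2, 33

h_total = H_VISIBLE + H_FP + H_SYNC + H_BP      # pixels per line
v_total = V_VISIBLE + V_FP + V_SYNC + V_BP      # lines per frame
refresh_hz = PIXEL_CLOCK_HZ / (h_total * v_total)
print(f"{h_total} x {v_total} total, {refresh_hz:.2f} Hz refresh")
```

The same three lines of arithmetic, run in reverse, give you the pixel clock needed for any target resolution and refresh rate.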
I'm looking for a functional spec rather than a timing spec. I remember reading an article that described some of the functionality of CGA, EGA and VGA. When most people in this group inquire about a VGA adapter, what they are actually asking for is a frame buffer that generates video timing signals for a multisync monitor for an IBM PC. I would like references for the original IBM 256 Kb VGA adapter or something newer (supporting the new video modes).

I'd really like to hear from somebody who was connected with or has documentation from Tseng. I seem to remember back in the day their VGA adapters, for the cost, were a step ahead of the competition.

I'm going to spend the rest of the night reading the documents from the VESA website.

Derek

Article: 97731
Peter Alfke wrote:
> For high-pincount and/or high-performance circuits, through-hole has
> been dead for more than a decade. Through-hole means 100 mil (~2.5 mm)
> pin spacing, which becomes hopeless above 200 pins.

Huh? I'm writing this email on a computer that has a 940-pin through-hole processor with three gigabit interfaces to I/O subsystems. The pin spacing is less than 100 mils.

Article: 97732
Brad Smallridge wrote:
> Thanks Ray,
>
> > The Virtex4 FIFO16 flag logic has a design flaw that renders them
> > unreliable for asynch operation.
>
> I was aware of this issue. However, I thought that I was "safe" since
> I am not using the ALMOSTEMPTY flag, nor should the two FIFO16s ever be
> in a situation where the ALMOSTEMPTY flag would ever be in transition.
> It should always be on. My circuit is fairly straightforward: as soon
> as the EMPTY flag comes on, address information is passed from the FIFO
> output to the SRAM address lines. There are two FIFOs competing for the
> address lines, with one FIFO having priority for the situation when
> both EMPTY flags come on at the same time. The data is spread out, so
> presently I would never expect to see more than one datum in either
> FIFO at any time.
>
> Since the ALMOSTEMPTY flag is always on, I was hoping that its
> metastable effect on the EMPTY flag was circumvented. I guess that's a
> question for Xilinx to answer.
>
> > You can, however, use it synchronously and cascade it with a small
> > async coregen fifo implemented in the fabric so that the net effect
> > is an async fifo.
>
> And the coregen fifo is OK?
>
> Brad Smallridge
> Ai Vision

Be careful with that. I thought I was safe too, but I got tripped up by the flag issue. I never looked at almost empty, but I don't think my design should have reached it (I found this problem before Xilinx did).

The coregen fifo is fine. It is designed by a separate group, and its design is mature.

Article: 97733
In article <1141005334.019869.143020@j33g2000cwa.googlegroups.com>, dereks314@gmail.com says...
> I'm looking for a functional spec rather than a timing spec. I remember
> reading an article that described some of the functionality of CGA, EGA
> and VGA. When most people in this group inquire about a VGA adapter
> what they are actually asking for is a frame buffer that generates
> video timing signals for a multisync monitor for an IBM PC. I would
> like references for the original IBM 256 Kb VGA adapter or something
> newer (supporting the new video modes).
>
> I'd really like to hear from somebody that was connected with or has
> documentation from TSENG. I seem to remember back in the day their VGA
> adapter's, for the cost, were a step ahead of the competition.

_Programmer's Guide to the EGA, VGA, and Super VGA Cards_ by Richard Ferraro has pretty decent descriptions of most of the graphics cards that were current when it was published, including a couple from Tseng Labs. I haven't checked, but I'd guess this has been out of print for a while, so you'll probably have to find a used copy. The ISBN is (was) 0-201-62490-7.

Of course, quite a bit of it won't be applicable to what you're apparently doing, but it has register-level descriptions of the interface the cards presented to the rest of the machine, and for the VGA and similar cards, that tells you quite a bit about their internal structure as well.

I would note, however, that much of the design of the VGA (for one example) was a long way from ideal in a modern system. Much of the architecture of the VGA centered on its connection to an 8-bit bus. If you're creating a graphics controller in an FPGA, there's no reason to restrict your bus to only 8 bits wide unless you're planning on emulating an entire PC design so you can run PC software on your system.
Otherwise, you might as well use a wide bus to connect the graphics to the rest of the system, which will simplify the overall design a LOT. Believe me, the VGA design really put a lot of effort into getting decent performance in spite of a narrow bus, so if you can start from a clean slate, you can do a LOT better.

--
Later,
Jerry.

The universe is a figment of its own imagination.

Article: 97734
Ray, with all due respect: Yes, this problem was first found by a couple of customers, and you were probably the first. But only we at Xilinx know the innards of the control logic implementation, and I was the one who clearly traced the problem to the inputs of two internal flip-flops.

Having analyzed it, I can assure you that there is nothing mysterious about the misbehavior. It happens very rarely, but always for an identifiable logic reason. It has nothing to do with metastability; it is totally deterministic. The description may sound convoluted, but the behavior is clear.

It has to do with the ALMOST EMPTY flag potentially getting and staying inverted. And since ALMOST EMPTY is used as a necessary condition for decoding EMPTY, the EMPTY flag also gets impacted if, AND ONLY IF, there is a simultaneous read/write operation on the second clock tick AFTER REACHING OR LEAVING THE THRESHOLD OF "ALMOST EMPTY". That's why the failure occurs so rarely in asynchronous FIFO use. If ALMOST EMPTY stays active (i.e. if its deactivating threshold is set high enough to never be reached), there will never ever be an erroneous EMPTY flag.

We have analyzed this, and then also simulated and tested it, and provoked it. I know all the gory details. That's why I gave Brad this strong assurance that in his design he can 100% rely on the EMPTY flag going active when it should, and going inactive soon after something has been written into the FIFO16. I have had sleepless nights over this. Brad does not have to...

Peter Alfke, from home

Article: 97735
Use HyperTerminal. Its configuration depends on the configuration of your UART (speed, data size, parity, stop bit). You should put some application after the UART that takes the data from the UART and sends it back; that way you can check the UART.

<zhangweidai@gmail.com> wrote in message news:1140997658.200030.270040@j33g2000cwa.googlegroups.com...
> hello
>
> i plan on synthesizing miniuart (opencores.org) onto a spartan 3 demo
> board. but then i dont know how to test it. how do i configure/run a
> terminal console in windows to talk with my board?
>
> thanks,
> Peter

Article: 97736
Peter Alfke wrote:
> That then also means ball grid arrays, and is great for professional
> assembly, but a killer for the hobbyist.

The funny thing is that BGA brings SMT to the hobbyist, whereas high-density flat packs have a pitch so narrow that the parts cannot be hand-placed on paste or easily soldered by hand, even with a stereo microscope - though it is possible for smaller pin-count devices like memory. I've shown the homebrew computer group here how to reliably solder BGAs, and reball them, so they can use salvage at low cost. Placing and soldering a 400-700 ball BGA is a piece of cake; hand-soldering the SDRAM and EEPROMs for the design is REALLY painful for TSOPs. It's actually easier to do powerful hobby designs in BGA than it was in DIP parts.

Article: 97737
Thanks for the reply. I found some programs that will help me send/receive signals from the PC side. My question now is how do I set up the FPGA? Mainly, how should I assign pins when synthesizing MiniUART? I want to use RS-232.

Article: 97738
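Apart from pin assignment, the UART's clock divider has to match the terminal settings. A quick sketch of divisor choice and baud-rate error (the 50 MHz system clock and 16x oversampling are assumptions; check what MiniUART actually expects):

```python
# Baud-rate divisor for a UART with a 16x oversampling clock.
# 50 MHz is an assumed board clock; MiniUART's actual scheme may differ.
F_CLK = 50_000_000
OVERSAMPLE = 16

def divisor_and_error(baud):
    """Nearest integer divisor and the resulting rate error in percent."""
    div = round(F_CLK / (OVERSAMPLE * baud))
    actual = F_CLK / (OVERSAMPLE * div)
    return div, 100.0 * (actual - baud) / baud

for baud in (9600, 115200):
    d, err = divisor_and_error(baud)
    print(f"{baud:6d} baud: divisor {d}, error {err:+.2f}%")
```

Rate errors below roughly 2-3% are generally tolerated by the receiver, so both of these settings are safe with a 50 MHz clock.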
Hi,

I had downloaded a DDR SDRAM reference design from Xilinx for V2P. Actually, I used MIG007 (rev 6) to create the design. This design uses a LUT-based delay mechanism to align the strobes. The design can be tested on Xilinx SL361, ML361, and ML367 boards. I would like to know whether anyone has tested this reference design on non-Xilinx boards?

Thanks
--
Amy

Article: 97739
Brad Smallridge wrote:
> >            1   2   3
> >            |   |   |
> >          _   _   _   _   _   _   _
> > rdclk  _/ \_/ \_/ \_/ \_/ \_/ \_/ \_
> >        ______                _______
> > empty        \_______/
> >               ___
> > rden   ______/   \__________________
>
> This diagram is correct except for the fact that if your RDEN logic
> comes from the EMPTY flag, then RDEN will be two pulses long as well.
> That could be trouble if you only have one datum.

???? If you set, for example:

rden <= not empty;

then empty will only be negated for one cycle. If it's not, then you might be encountering the FIFO16 bug (see Answer Record 22462), but reproducing that bug in the async case is quite unlikely...

> By the way, how do you set up Outlook Express to give Courier fonts?

I don't... Who uses Outlook anyway... ;p

Sylvain

Article: 97740
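Sylvain's point can be illustrated with a toy behavioral model: with read-enable driven as "not empty", a FIFO holding a single word produces exactly one read pulse (this models only the ideal flag behavior, not the FIFO16 silicon bug discussed in the thread):

```python
# Toy read-side model: rden <= not empty, one datum in the FIFO.
from collections import deque

fifo = deque([0xAB])            # a single word waiting to be read
rden_pulses = 0
for _ in range(5):              # five read-clock ticks
    empty = not fifo
    rden = not empty            # combinational: rden <= not empty
    if rden:
        fifo.popleft()          # the word is consumed on this tick...
        rden_pulses += 1        # ...so empty reasserts next tick
```

After the first tick the FIFO is empty again, so rden never pulses a second time; a two-cycle rden would imply either a second datum or a flag misbehaving.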
In article <1140781468.330710.322700@e56g2000cwe.googlegroups.com>, devendra.bhale@gmail.com says...
> I have a Xilinx 95108. I am clocking it with a 555 timer and testing
> some small project. The 95108 is getting heated up when I connect the
> 555 output to one I/O pin. I don't understand what to do about it; this
> is a problem because it rendered my previous chip non-programmable when
> I was doing the same thing.
>
> The circuit is:
>
> (DIP) 555 ---> 95108 (PLCC 84) ---> CRO
>
> I have hand-soldered everything on a general-purpose board. Anybody had
> a similar problem?
>
> regards

On the data sheet there are very long rise and fall times for the 555 (100 ns). Maybe you should put a Schmitt-trigger buffer between the 555 and the CPLD.

Greetings
Klaus

Article: 97741
Peter Alfke schrieb:
> Brendan, use Virtex-4 instead. There you have the IDELAY that gives you
> sub-100-picosecond granularity on the input side, and stability over
> temperature and voltage changes. It's meant for your purpose.

I'm interested in time-to-digital conversion; can the IDELAY be used for that?

Bye
Tom

Article: 97742
I have a design with a large PLA, and I'm trying to make it run fast in a Spartan-3. It's too big for block RAM, since it has 25 inputs, slightly fewer than 512 product terms, and 32 outputs.

My first attempt was to just translate the PLA equations to VHDL and synthesize with the default settings. This uses 34% of a 3S500E and has a minimum cycle time of slightly under 12 ns with over 20 levels of logic. If I turned on timing-directed mapping, or increased the effort levels, I suppose it might get slightly better. But my own analysis suggests that it should be possible to implement the PLA with no more than 11 levels of logic worst case: seven levels for the product terms, and four levels for the sums.

Anyhow, as the subject line suggests, I'm interested in any clever tricks to get a more efficient FPGA implementation of the PLA. For instance, would use of the carry chain speed up wide gates? (Or do the tools already infer that?) Should I put some constraint on the product terms, to keep the tools from merging the product-term and sum logic?

I'll experiment with this myself, but perhaps someone here has already done this, in which case suggestions would be quite welcome.

Thanks,
Eric

Article: 97743
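As a rough cross-check of Eric's depth estimate, one can count idealized 4-LUT tree levels for the wide gates. This generic count ignores input polarity, term sharing, and routing, so it is a lower bound that lands below Eric's 7+4 analysis, not a reproduction of it:

```python
import math

def lut_tree_levels(n_inputs, lut_k=4):
    """Levels of k-input LUTs needed to reduce an n-input AND/OR tree
    to a single bit, assuming perfect packing at every level."""
    levels = 0
    while n_inputs > 1:
        n_inputs = math.ceil(n_inputs / lut_k)
        levels += 1
    return levels

product_levels = lut_tree_levels(25)    # one 25-input product term
sum_levels = lut_tree_levels(512)       # OR of up to 512 product terms
print(product_levels, sum_levels)
```

The gap between this ideal figure and the 20+ levels the default flow produced suggests the mapper is merging product and sum logic badly, which supports the idea of constraining the product terms separately.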
If you are connecting to a PC serial port, you need an RS-232 transceiver chip between the signals and the FPGA, as the voltage levels are not directly compatible with FPGAs (or most other logic chips, for that matter). Many development boards have driver chips on board, or, like us, have an add-on module for this. If you have a development board, then boards with the "fixed" solution will predetermine the FPGA pins; with our approach you assign the pins of the header that you are actually using.

John Adair
Enterpoint Ltd. - Home of Raggedstone1. The board cheaper than a tank of petrol.
http://www.enterpoint.co.uk

<zhangweidai@gmail.com> wrote in message news:1141020346.303394.315940@i40g2000cwc.googlegroups.com...
> thanks for the reply. I found some programs that will help me
> send/receive signals from the pc side. My question now is how do I set
> up the FPGA? mainly, how should i assign pins when synthesizing
> miniuart. I want to use rs232

Article: 97744
Hi,

I have made a generic component like below:

entity fifo is
  generic (
    AW             : integer;
    PROG_EMPTY_THD : std_logic_vector(AW downto 0);
    PROG_FULL_THD  : std_logic_vector(AW downto 0)
  );
  port (
    rst           : in  std_logic;
    write_address : out std_logic_vector(AW-1 downto 0);
    read_address  : out std_logic_vector(AW-1 downto 0);
    wr_clk        : in  std_logic;
    .....
  );

This component fifo is instantiated in my design with different values for AW. My design is synthesizable and it is actually working in hardware too. But the ModelSim XE simulator gives the following error, pointing at the package in which my component is declared:

Object 'aw' cannot be used within the same interface as it is declared.

Is there any workaround for this problem? Any pointers will be helpful. When ModelSim compiles the fifo module separately, it sees the AW constant used in std_logic_vector(AW downto 0), so I think it is not able to determine the width of the vector.

rgds
bijoy

Article: 97745
Hi,

I am encountering a serious problem with XST when I try to synthesize a design. The design consists of several components that synthesize fine by themselves. However, when I want to synthesize the top entity, which only plugs the simple components together, XST fails during low-level synthesis without giving me any reason WHY synthesis failed. It's the same for ISE 6.3, 7.1 and 8.1.

Is there a way to find out why XST fails, something like a debug or verbose mode? Or does anyone have an idea how to isolate the problem?

Thanks in advance!
Matthias

Article: 97746
A series R + C to ground sometimes helps.

Article: 97747
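For what it's worth, the slowdown such a filter adds to the already slow 555 edges can be estimated quickly (the 1 kOhm / 100 pF values and the mid-rail threshold are example assumptions, not from the post):

```python
import math

# RC low-pass on the 555 output: time constant and time to reach the
# CPLD's switching threshold. Example values, not from the post.
R, C = 1_000, 100e-12           # 1 kOhm, 100 pF
VCC, VTH = 4.5, 2.25            # supply and an assumed mid-rail threshold

tau = R * C                                   # 100 ns time constant
t_rise = -tau * math.log(1 - VTH / VCC)       # ~69 ns to threshold
print(f"tau = {tau*1e9:.0f} ns, threshold reached after {t_rise*1e9:.0f} ns")
```

The filter smooths ringing and noise, but it makes the edge through the threshold region even slower, which is exactly why pairing it with a Schmitt-trigger buffer, as Klaus suggests above, is the safer fix.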
Hi,

There is a 4.5 V supply for the entire circuit throughout the board. I am testing with a Schmitt trigger today.

Thanks, guys
Augast15

Article: 97748
Hi bijoy,

You could declare AW, or even different AWs (AW1, AW2, AW3, ...), in a separate package (which could also be used for synthesis, of course). By doing so, ModelSim should have no problem identifying the length of the std_logic_vector in the corresponding instantiation.

Rgds
André

Article: 97749
XST is synthesizing the following Verilog code to an 8192x24-bit RAM:

module bram_8k (clk, addr, di, do);
  input clk;
  input [12:0] addr;
  input [23:0] di;
  output reg [23:0] do;

  reg [23:0] ram [8191:0];

  always @(posedge clk) begin
    ram[addr] <= di;
    do <= ram[addr];
  end
endmodule

ISE maps this 8192x24-bit RAM to 12 RAMB16s. But when the device is about 65% occupied (with the other modules integrated), the RAMs are not placed as neighbors, leading to different timing problems on different compilations. The requirement is for ISE map to place these RAMs closely, either in a column or in a controlled rectangular array of RAMB16s, after which PAR can place the group optimally anywhere in the FPGA based on the other modules. I have not been able to use RLOC or RLOC_RANGE constraints to accomplish this.

Kindly let me know how to control the relative placement of the RAMB16s. Thank you; I await your inputs/suggestions.

K Sudheer Kumar
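One ISE mechanism worth trying (syntax recalled from the ISE constraints guide; verify the instance path and the RAMB16 site range against your own hierarchy and target device) is a UCF AREA_GROUP whose RANGE is restricted to block-RAM sites:

```
# "u_bram" is a hypothetical instance name for the bram_8k module.
# All twelve RAMB16s are confined to one block-RAM column, so they
# stay adjacent regardless of how full the rest of the device is.
INST "u_bram/*" AREA_GROUP = "AG_BRAM";
AREA_GROUP "AG_BRAM" RANGE = RAMB16_X0Y0:RAMB16_X0Y11;
```

Note the trade-off: unlike a true relative-placement macro, a fixed RANGE pins the group to specific sites rather than letting PAR float it, so you give up some placement freedom in exchange for repeatable timing.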