
Rick Filipkiewicz wrote: > Andy Peters wrote: > > Don't be so sarcastic about these parts. They - Intel - had the idea of ISP through > the JTAG port before anyone else was even thinking about ISP. The Flex devices had 2 > other advantages: > > (1) The fitter was incredibly efficient considering the meager resources it had to > work with. It even ran fairly quickly on an i486-66. > > (2) The tools could produce a raw JTAG file i.e. a file consisting of a series of > TDI/TMS pairs. This allowed us to hold the configuration in a socketed EPROM - great > for customer upgrades. Yes, of course Intel has smart designers. Does anybody doubt that ? What they did not understand was economics. These chips were gigantic in size, totally out of proportion to their logic capabilities. So Altera dropped them like a hot potato, after they foolishly had bought the line. In a competitive world, dollars and cents cannot be ignored. Peter Alfke

Matt, good luck. This is quite a project. Read the IEEE standards publications. Old Intel and AMD data sheets may be easier reading... If you want to be IEEE standard compatible you have to deal with the borderline strange cases: • representation of zero is a must (it's unnatural in FP) • graceful underflow, when the mantissa must be denormalized because the exponent has reached its most negative value, etc. This has all been standardized to make sure that any calculation will always give exactly the same result, independent of the hardware. If you just want to extend the range of practical numbers, life is much easier. But you still must struggle with zero. Peter Alfke Matt Billenstein wrote: > All, > > I've taken on a project where I'll be implementing a number of math > functions on IEEE double precision floating point types (64 bit). > Multiplication, division, addition, and subtraction are fairly straight > forward. I'll need to do cosine, exponential (e^x), and square roots. Any > advice/pointers/book titles would be appreciated. I'll be implementing in > VHDL targeting a large Xilinx VirtexE device (XCV1000E). Hopefully at 66 or > 100 MHz. > > Thanks, > > Matt > > -- > > Matt Billenstein > REMOVEmbillens@one.net > REMOVEhttp://w3.one.net/~mbillens/
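Peter's two trouble spots — zero and gradual underflow — are easy to see by picking apart the bit fields of a double. A quick Python sketch (a software illustration only, not HDL; the helper name is mine):

```python
import struct

def fields(x):
    """Split a double into its (sign, exponent, mantissa) bit fields."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    exp = (bits >> 52) & 0x7FF
    mant = bits & ((1 << 52) - 1)
    return sign, exp, mant

# Zero is the all-zeros pattern: it cannot be written as 1.m * 2^e,
# which is why it needs a special encoding.
print(fields(0.0))      # (0, 0, 0)

# A denormal: exponent field 0 but mantissa nonzero; the implicit
# leading 1 is dropped, so tiny values underflow gradually.
print(fields(5e-324))   # (0, 0, 1)

# An ordinary number: biased exponent (bias 1023), implicit leading 1.
print(fields(1.0))      # (0, 1023, 0)
```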

Falk Brunner wrote: > Hmm, what should we expect?? That Altera says the Xilinx parts are > better?? > And Xilinx says the Altera parts are better?? > Both "experiments" have their points, but they both have the smell of > marketing and are influenced by company policy. > This is true, but after a while you as a user may develop a feeling for which of the marketing departments is taking the greater liberties with the truth. Austin and I are not in marketing, and we both will not tell lies. Obviously, we rejoice when Xilinx is better, and we both live in an environment where everybody thinks that, in most cases, Xilinx offers the superior solution. This is America, everybody wants to be a winner. You as users will vote with your pocketbooks. Recently, the vote has been pretty much in our favor. Peter Alfke, Xilinx Applications

On Sat, 10 Feb 2001 03:40:50 GMT, "Matt Billenstein" <mbillens@mbillens.yi.org> wrote: >All, > >I've taken on a project where I'll be implementing a number of math >functions on IEEE double precision floating point types (64 bit). >Multiplication, division, addition, and subtraction are fairly straight >forward. I'll need to do cosine, exponential (e^x), and square roots. Any >advice/pointers/book titles would be appreciated. I'll be implementing in >VHDL targeting a large Xilinx VirtexE device (XCV1000E). Hopefully at 66 or >100 MHz. > >Thanks, > >Matt Actually even the basic operations are fairly hairy if you have to be fully IEEE compliant especially implementing all rounding options etc. I hope you didn't take this as a fixed price project and wish you good luck. Muzaffer FPGA DSP Consulting http://www.dspia.com

Muzaffer Kal wrote: > > On Sat, 10 Feb 2001 03:40:50 GMT, "Matt Billenstein" > <mbillens@mbillens.yi.org> wrote: > > >All, > > > >I've taken on a project where I'll be implementing a number of math > >functions on IEEE double precision floating point types (64 bit). > >Multiplication, division, addition, and subtraction are fairly straight > >forward. I'll need to do cosine, exponential (e^x), and square roots. Any > >advice/pointers/book titles would be appreciated. I'll be implementing in > >VHDL targeting a large Xilinx VirtexE device (XCV1000E). Hopefully at 66 or > >100 MHz. > > > >Thanks, > > > >Matt > > Actually even the basic operations are fairly hairy if you have to be > fully IEEE compliant especially implementing all rounding options > etc. I hope you didn't take this as a fixed price project and wish you > good luck. Absolutely! What you've taken on is basically emulating most of the x87 part of a Pentium class processor. Afaik, x86 is the only remaining (after 68K more or less stopped) high-volume part that tries to implement more than the basic/core IEEE fp ops. BTW, FSQRT is part of the core fp set, which means that you have to get this exactly correct, down to the last mantissa bit, for all possible inputs including Inf, NaN, Zero and gradually underflowing numbers. The most reasonable way to do this today would seem to be to start with a fast fp multiply-add unit, and then implement the transcendentals in the form of table-based stepwise polynomial approximations. OTOH, if you don't have that FMAC unit as a building block, then I guess some other approach might be better, possibly CORDIC. Terje -- - <Terje.Mathisen@hda.hydro.com> Using self-discipline, see http://www.eiffel.com/discipline "almost all programming can be viewed as an exercise in caching"
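Terje's table-plus-polynomial scheme is easy to prototype in software before committing it to LUTs and block RAM. A Python sketch with toy parameters of my own choosing (16 intervals, degree-2 Taylor pieces for e^x on [0,1)) — not a production design:

```python
import math

# Table-based stepwise polynomial approximation of e^x on [0, 1):
# the interval index addresses a small coefficient "ROM", and the
# polynomial is evaluated with two multiply-add steps (FMA-friendly).
N = 16                        # 16 intervals -> 4 address bits of ROM
table = []
for i in range(N):
    m = (i + 0.5) / N         # interval midpoint
    e = math.exp(m)           # degree-2 Taylor coefficients around m
    table.append((m, e, e, e / 2.0))

def exp_approx(x):
    assert 0.0 <= x < 1.0
    m, c0, c1, c2 = table[int(x * N)]
    d = x - m
    return c0 + d * (c1 + d * c2)   # two fused multiply-adds

# Worst-case Taylor error here is about e * (1/32)**3 / 6 ~ 1.4e-5:
# fine as a first stage, nowhere near full double precision.
err = max(abs(exp_approx(x) - math.exp(x))
          for x in (k / 4096.0 for k in range(4096)))
```

More intervals and higher-degree pieces trade ROM bits against multiplier count, which is exactly the knob a hardware design turns.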

Peter Alfke wrote: > Rick Filipkiewicz wrote: > > > Andy Peters wrote: > > > > Don't be so sarcastic about these parts. They - Intel - had the idea of ISP through > > the JTAG port before anyone else was even thinking about ISP. The Flex devices had 2 > > other advantages: > > > > (1) The fitter was incredibly efficient considering the meager resources it had to > > work with. It even ran fairly quickly on an i486-66. > > > > (2) The tools could produce a raw JTAG file i.e. a file consisting of a series of > > TDI/TMS pairs. This allowed us to hold the configuration in a socketed EPROM - great > > for customer upgrades. > > Yes, of course Intel has smart designers. Does anybody doubt that ? > What they did not understand was economics. These chips were gigantic in size, totally > out of proportion to their logic capabilities. So Altera dropped them like a hot > potato, after they foolishly had bought the line. > In a competitive world, dollars and cents cannot be ignored. > > Peter Alfke I was only really responding to the ``Volvo'' comment. I'd heard this thing about the expense of making them. What I'd also heard was that Altera bought the line for the Flash technology used in the more advanced 880 - maybe not so foolish. Another advanced feature of this line was that the biggest of the devices [8160 ?] was split internally into 2 parts. This allowed, supposedly, partial reconfig while keeping the system running. All that said, the - admittedly forced - decision to go Xilinx when Altera bought the family [I assumed they'd be dropped v. quickly] was the best decision I've made. One thing that always tempers my despondency about the SpartanII cock-up is the memory of waiting, waiting, ...., & more waiting for the ISP variants of the MAX 7K to come out.

Phil Hays <spampostmaster@home.com> writes: > Carry chain can be inferred by synthesis tools, however the code may not be > highly readable. For example, to create an OR gate: > > OR_temp <= '0' & A & B & C & D & E; > Result_temp <= OR_temp + "011111"; > Result <= Result_temp(5); -- result is zero unless (A or B or C or D or E) = 1 Clever trick! > I'd suggest using a procedure to improve readability. > > Biggest gain in speed is from using the carry chain for priority encoders, large > AND and OR gates gain some. I agree, but do you have examples? You can let the tools infer the carry chain when you want an OR, because the carry chain is used in the same configuration as in an adder. But how can I infer a wide-AND, or a priority encoder? I know how to use the carry chain when I instantiate it directly, but how can I infer it in cases other than adders? -- Chris
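The arithmetic behind Phil's OR trick can be checked exhaustively in software; a small Python model of the 6-bit addition (function name is mine, not from the post):

```python
from itertools import product

# Carry-chain OR trick: prepend a 0 guard bit to the five inputs and
# add "011111"; the carry ripples into bit 5 iff any input is 1.
def carry_chain_or(a, b, c, d, e):
    word = (a << 4) | (b << 3) | (c << 2) | (d << 1) | e  # '0'&A&B&C&D&E
    return ((word + 0b011111) >> 5) & 1

for bits in product((0, 1), repeat=5):
    assert carry_chain_or(*bits) == (1 if any(bits) else 0)
```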

hello, Does a disabled Flip Flop ( but still clocked...) consume power ? --Erika Sent via Deja.com http://www.deja.com/

hey, A simple question, I want to generate a pulse of 3 clock cycles width every 256 clock cycles I am using Xilinx Virtex FPGA any clue ? thanks --Karen Sent via Deja.com http://www.deja.com/

karenwlead@my-deja.com schrieb:
> hey,
>
> A simple question, I want to generate a pulse of 3 clock cycles width
> every 256 clock cycles
>
> I am using Xilinx Virtex FPGA
>
> any clue ?

In VHDL it would look like this.

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_ARITH.ALL;
use IEEE.STD_LOGIC_UNSIGNED.ALL;

entity pulse is
   port (
      clk: in std_logic;
      pulse: out std_logic
   );
end pulse;

architecture arch1 of pulse is
   signal count: std_logic_vector(7 downto 0) := (others => '0');
begin
   process(clk)
   begin
      if clk='1' and clk'event then
         count <= count + 1;
         if count=1 or count=2 or count=3 then
            pulse <= '1';
         else
            pulse <= '0';
         end if;
      end if;
   end process;
end arch1;

--
MFG
Falk

On Sat, 10 Feb 2001 03:40:50 GMT, "Matt Billenstein" <mbillens@mbillens.yi.org> wrote: >All, > >I've taken on a project where I'll be implementing a number of math >functions on IEEE double precision floating point types (64 bit). >Multiplication, division, addition, and subtraction are fairly straight >forward. I'll need to do cosine, exponential (e^x), and square roots. Any >advice/pointers/book titles would be appreciated. I'll be implementing in >VHDL targeting a large Xilinx VirtexE device (XCV1000E). Hopefully at 66 or >100 MHz.

Matt - I did floating-point work at AMD many years ago; I still get occasional flashbacks. Here are some random comments that I hope are helpful:

Format support vs. full standard support - Supporting the IEEE format is straightforward; supporting the rounding modes, gradual underflow, reserved operands, etc. is messy, time-consuming, and can have severe repercussions on speed and hardware size. If I were implementing floating-point and had a choice, I'd:

1) drop support of gradual underflow and denormalized numbers
2) drop support of reserved operands, e.g., infinities, not-a-number
3) drop support of unbiased rounding.

There are many hard-core floating-point people who think such suggestions are heresy. And perhaps your application requires these features. My point is that you don't want to implement these things unless you absolutely have to.

Calculating transcendentals - AMD did one of the earliest floating-point processors, the 9511, which used Chebyshev polynomials to calculate transcendental functions. A couple of my colleagues wrote up a description of the algorithms in an Electronic Design article that was published in the late '80s. Other possibilities are:

1) BSD used to be distributed with a floating-point library that used double-precision arithmetic to calculate various transcendentals. I have some source code, dated 1986, that was part of the then-version-7 BSD math library. I assume that something similar is still part of BSD, but am not sure.
2) use CORDIC.

References -

1) Computer Arithmetic - Earl Swartzlander, editor. Excellent collection of seminal papers on various aspects of computer arithmetic, including calculation of transcendentals. There are a couple of good papers on CORDIC.
2) Software Manual for the Elementary Functions - Cody and Waite. As the title suggests, the book describes algorithms that were intended for implementation in software. Nevertheless, it may be useful as a reference.

Reading over what I've written, I see that I've given no easy answers. Sad to say, I don't think there are any.

Bob Perlman
Cambrian Design Works
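A Chebyshev fit like the one Bob recalls from the 9511 is easy to prototype. A Python sketch (the coefficient count and target function are my choices, not the 9511's; in hardware the coefficients would be ROM constants):

```python
import math

# Chebyshev fit of cos on [-1, 1] via the discrete orthogonality
# formula; evaluation uses the numerically stable Clenshaw recurrence.
def cheb_coeffs(f, n):
    c = []
    for k in range(n):
        s = sum(f(math.cos(math.pi * (j + 0.5) / n)) *
                math.cos(math.pi * k * (j + 0.5) / n)
                for j in range(n))
        c.append(2.0 * s / n)
    return c

def cheb_eval(c, x):
    # Clenshaw: b_k = c_k + 2x*b_{k+1} - b_{k+2}; result uses c0/2
    b1 = b2 = 0.0
    for ck in reversed(c[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + ck, b1
    return x * b1 - b2 + 0.5 * c[0]

coeffs = cheb_coeffs(math.cos, 10)
err = max(abs(cheb_eval(coeffs, x) - math.cos(x))
          for x in (k / 500.0 - 1.0 for k in range(1001)))
# ten short coefficients already push the error well below 1e-7
```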

thanks for the reply. i use schematic entry ? In article <3A858A1B.11A3DD6D@gmx.de>, Falk Brunner <Falk.Brunner@gmx.de> wrote: > karenwlead@my-deja.com schrieb: > > > > hey, > > > > A simple question, I want to generate a pulse of 3 clock cycles width > > every 256 clock cycles > > > > I am using Xilinx Virtex FPGA > > > > any clue ? > > In VHDL it would look like this. > > library IEEE; > use IEEE.STD_LOGIC_1164.ALL; > use IEEE.STD_LOGIC_ARITH.ALL; > use IEEE.STD_LOGIC_UNSIGNED.ALL; > > entity pulse is > port ( > clk: in std_logic; > pulse: out std_logic > ); > architecture arch1 of pulse is > signal count: std_logic_vector(7 downto 0); > begin > process(clk) > begin > if clk='1' and clk'event then > count<=count+1; > if count=1 or count=2 or count=3 then > pulse<='1'; > else > pulse<='0'; > end if; > end if; > end process; > end arch1; > > -- > MFG > Falk > Sent via Deja.com http://www.deja.com/

karenwlead@my-deja.com schrieb: > > thanks for the reply. i use schematic entry ? So get professional and use VHDL. Just kidding. ;-))) So, simply take an 8-bit counter (outputs q7-q0); your pulse output is HIGH when the counter is at 1, 2 or 3. This means: take a 2-input AND gate whose 2 inputs are fed by a 6-input AND gate, which is fed with /Q7 /Q6 /Q5 /Q4 /Q3 /Q2 (/Qx means inverted), and a 2-input OR gate, which is fed with Q1 Q0. This should work. -- MFG Falk
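Falk's gate recipe can be sanity-checked exhaustively in software; a quick Python model of the decode (names are mine):

```python
# Falk's decode: pulse is high when Q7..Q2 are all 0 (the 6-input AND
# of inverted bits) and Q1 or Q0 is set -- i.e. exactly for counts
# 1, 2 and 3 of the 8-bit counter.
def decode(count):
    upper_zero = (count & 0b11111100) == 0   # /Q7../Q2 AND gate
    low_or = (count & 0b00000011) != 0       # Q1 OR Q0
    return upper_zero and low_or

assert [n for n in range(256) if decode(n)] == [1, 2, 3]
```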

hey, thanks so much, basically counter+comparator you are right, i should learn VHDL, could you orient me on any valuable book to write 1- efficient vhdl code for FPGA platform 2- testbenches 3- back annotation don't think about the price...the university will pay ;-) thank you again Karen In article <3A859D99.47BA87C9@gmx.de>, Falk Brunner <Falk.Brunner@gmx.de> wrote: > karenwlead@my-deja.com schrieb: > > > > thanks for the reply. i use schematic entry ? > > So get professional and use VHDL. > Just kidding. > ;-))) > > So, simply take a 8 bit counter (outputs q7-q0), and your pulse output > is HIGH when the counter is on 1,2 or 3 > This means, take a 2-input AND gate, the 2 inputs are feed by a 6- input > AND gate which is feed with > > /Q7 /Q6 /Q5 /Q4 /Q3 /Q2 (/Qx means inverted) > > and a 2-input OR which is feed with > > Q1 Q0 > > This should work. > > -- > MFG > Falk > Sent via Deja.com http://www.deja.com/

A disabled flip-flop consumes the same power as a flip-flop with constant data at its D input ( because that's really what the CE input achieves ). This applies to all Xilinx parts. We don't do clock gating ( except in Virtex-II, where we allow you to convert the global clock buffer into a neat clock multiplexer.) Peter Alfke, Xilinx Applications ================================= erika_uk@my-deja.com wrote: > hello, > > Does a disabled Flip Flop ( but still clocked...) consume power ? > > --Erika > > Sent via Deja.com > http://www.deja.com/

Simple: You build an 8-bit counter and then you feed the outputs into two 4-input look-up tables. The upper one detects a unique combination, e.g. all zeros. The other LUT detects the three adjacent codes ( out of 16 ). Then you AND the two LUT outputs. So the whole thing takes less than three CLBs in Virtex. For high speed, you can register (pipeline) the output, for free. You can run this at 200 MHz, if you feel like it. See, I did not use the ho..ork word. :-) Peter Alfke, Xilinx Applications ========================================= karenwlead@my-deja.com wrote: > hey, > > A simple question, I want to generate a pulse of 3 clock cycles width > every 256 clock cycles > > I am using Xilinx Virtex FPGA > > any clue ? > > thanks > > --Karen > > Sent via Deja.com > http://www.deja.com/
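Peter's two-LUT split can be modeled bit-for-bit in software; a Python sketch (the specific codes 1-3 are my assumed example of "three adjacent codes"):

```python
# Two 4-input LUTs on an 8-bit count: the upper LUT matches one unique
# code on Q7..Q4 (all zeros here), the lower LUT picks three adjacent
# codes out of its 16, and a final AND combines them.
UPPER_LUT = [1] + [0] * 15            # true only for nibble 0000
LOWER_LUT = [0, 1, 1, 1] + [0] * 12   # true for nibbles 1, 2, 3

def pulse(count):
    return UPPER_LUT[count >> 4] & LOWER_LUT[count & 0xF]

assert [n for n in range(256) if pulse(n)] == [1, 2, 3]
```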

"Matt Billenstein" <mbillens@mbillens.yi.org> writes: >I've taken on a project where I'll be implementing a number of math >functions on IEEE double precision floating point types (64 bit). >Multiplication, division, addition, and subtraction are fairly straight >forward. I'll need to do cosine, exponential (e^x), and square roots. Any >advice/pointers/book titles would be appreciated. I'll be implementing in >VHDL targeting a large Xilinx VirtexE device (XCV1000E). Hopefully at 66 or >100 MHz. You don't say how fast it needs to be. Do you mean one FP operation per cycle at 66 or 100MHz? Maybe not. If you need to be fast (why else an FPGA implementation) you will want a well pipelined design. There are many references on pipelined ALUs, back to the IBM 360/91. This means a barrel shifter for pre/post normalization, which will be a lot of CLBs. You could also use the 360/91 style division algorithm, which is an iterative algorithm that converges toward the quotient. How fast does it really need to be, especially for divide, cos, exp, sqrt? If it doesn't need to be super fast you could implement a relatively simple processor and a ROM control unit. For the more complicated functions, you will probably want this, anyway. I hope this helps, -- glen
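The 360/91-style convergent division glen mentions can be sketched in a few lines of Python. This follows the Goldschmidt formulation (a toy with a fixed iteration count, not a hardware design; names are mine):

```python
import math

# Goldschmidt-style convergent division: scale numerator and
# denominator by (2 - d) each step, driving the denominator toward 1.0
# and the numerator toward a/b. Convergence is quadratic once the
# denominator is normalized into [0.5, 1).
def goldschmidt_div(a, b, steps=6):
    assert b > 0
    e = math.frexp(b)[1]                 # b = m * 2^e with m in [0.5, 1)
    n, d = math.ldexp(a, -e), math.ldexp(b, -e)
    for _ in range(steps):
        f = 2.0 - d
        n, d = n * f, d * f              # the error 1-d squares each step
    return n
```

In hardware the attraction is that each step is just two multiplies and a two's-complement, so it maps onto a pipelined multiplier rather than a dedicated divider.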

"Chris G. Schneider" wrote: > I agree, but do you have examples? You can let the tools infer the carry chain when > you want an OR, because the carry chain is used in the same configuration as in an > adder. But how can I infer a wide-AND, or a priority encoder? Wide AND is: AND_temp <= '0' & A & B & C & D & E; Result_temp <= AND_temp + "000001"; Result <= Result_temp(5); -- result is zero unless (A AND B AND C AND D AND E) = 1 Again, I'd suggest using a procedure to improve readability. -- Phil Hays
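The wide-AND variant can likewise be checked exhaustively in software; a small Python model (function name is mine):

```python
from itertools import product

# Carry-chain wide AND: adding 1 to '0'&A&B&C&D&E only carries all the
# way into bit 5 when every one of the five inputs is 1.
def carry_chain_and(a, b, c, d, e):
    word = (a << 4) | (b << 3) | (c << 2) | (d << 1) | e
    return ((word + 0b000001) >> 5) & 1

for bits in product((0, 1), repeat=5):
    assert carry_chain_and(*bits) == (1 if all(bits) else 0)
```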

Hi, After an adventurous journey, VHDL Mode has finally found a new, permanent, and reliable home: http://opensource.ethz.ch/emacs/vhdl-mode.html At the same time, VHDL Mode 3.31 is released. It is basically what has already been out there as 3.31 beta release for quite some time. See the web page for release and installation notes. Even though there are still some interesting new features in the pipeline, I can't promise anything at this point. There are other things in life than just hacking freeware code. I guess this winter was simply not rainy enough here in the Pacific Northwest :-) Have fun Reto

Hi ! Here is a schematic diagram for your counter : http://www3.sympatico.ca/erv/timgen.gif waveforms : http://www3.sympatico.ca/erv/timgensim.gif It uses less than 3 CLBs (10 Virtex slices), it can be clocked pretty fast (1 Flop+1 LUT delay) and the output is glitch free. The only drawback is that the LFSR consumes more power than an equivalent binary counter. You can change the pulse start / stop position by inserting inverters to match the start / stop count (do not forget it's not a binary sequence; using the simulator is usually the fastest way to find the required values). Adding more start/stop logic blocks allows you to generate more output pulses from the same counter. Adding negative-edge-triggered flops before the RS inputs allows you to double the resolution (i.e., a flop on the pulse-end RS input would make the pulse last for 3.5 clock periods). You can find more info about LFSR counters here : http://www.xilinx.com/xapp/xapp052.pdf ---------- I use it in timing generators for space-efficient graphic LCD/VFD modules. ---------- BTW, the remark about schematic being "unprofessional" reminds me of similar comments about Assembly language usage ( usually made by VB users ;-). regards, Eric Vinter ----------------------- karenwlead@my-deja.com wrote: > thanks for the reply. i use schematic entry ? > > In article <3A858A1B.11A3DD6D@gmx.de>, > Falk Brunner <Falk.Brunner@gmx.de> wrote: > > karenwlead@my-deja.com schrieb: > > > > > > hey, > > > > > > A simple question, I want to generate a pulse of 3 clock cycles > width > > > every 256 clock cycles > > > > > > I am using Xilinx Virtex FPGA > > > > > > any clue ? > > > > In VHDL it would look like this. 
> > > > library IEEE; > > use IEEE.STD_LOGIC_1164.ALL; > > use IEEE.STD_LOGIC_ARITH.ALL; > > use IEEE.STD_LOGIC_UNSIGNED.ALL; > > > > entity pulse is > > port ( > > clk: in std_logic; > > pulse: out std_logic > > ); > > architecture arch1 of pulse is > > signal count: std_logic_vector(7 downto 0); > > begin > > process(clk) > > begin > > if clk='1' and clk'event then > > count<=count+1; > > if count=1 or count=2 or count=3 then > > pulse<='1'; > > else > > pulse<='0'; > > end if; > > end if; > > end process; > > end arch1; > > > > -- > > MFG > > Falk > > > > Sent via Deja.com > http://www.deja.com/
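Eric's LFSR-based counter is easy to model in software. A Python sketch of an 8-bit maximal-length LFSR, with the XNOR taps taken from the XAPP052 table (bits 8, 6, 5, 4, 1-indexed):

```python
# 8-bit maximal-length LFSR: it sequences through 255 of the 256
# states (the all-ones state is the XNOR lockup state). The sequence
# is not binary counting order, which is why Eric suggests finding the
# start/stop decode values with the simulator rather than arithmetic.
TAPS = (8, 6, 5, 4)   # XNOR feedback taps per XAPP052

def lfsr_step(state):
    fb = 1
    for t in TAPS:
        fb ^= (state >> (t - 1)) & 1   # XNOR = complemented XOR chain
    return ((state << 1) | fb) & 0xFF

s, seen = 0, set()
while s not in seen:
    seen.add(s)
    s = lfsr_step(s)
# len(seen) == 255: every state except all-ones is visited
```

The hardware payoff is that the feedback is a single LUT and the register chain has no carry path, which is where the "1 Flop + 1 LUT delay" clock rate comes from.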

Falk Brunner wrote: > > karenwlead@my-deja.com schrieb: > > > > thanks for the reply. i use schematic entry ? > > So get professional and use VHDL. > Just kidding. > ;-))) > > So, simply take a 8 bit counter (outputs q7-q0), and your pulse output > is HIGH when the counter is on 1,2 or 3 > This means, take a 2-input AND gate, the 2 inputs are feed by a 6-input > AND gate which is feed with > > /Q7 /Q6 /Q5 /Q4 /Q3 /Q2 (/Qx means inverted) > > and a 2-input OR which is feed with > > Q1 Q0 > > This should work. How about when the counter counts? 01111111 10000000 Hint for the h-work kid: look up static hazard in your logic book. ----------------------------------------------------------------------- rk A designer has arrived at perfection stellar engineering, ltd. not when there is no longer anything stellare@erols.com.NOSPAM to add, but when there is no longer Hi-Rel Digital Systems Design anything to take away - Bentley, 1983
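The hazard rk points at is worst at exactly that rollover; a one-line Python check of how many counter bits move at once:

```python
# At the 01111111 -> 10000000 rollover every one of the 8 counter bits
# toggles simultaneously, so a purely combinational decode of those
# bits can glitch while the outputs race each other -- the static
# hazard rk is hinting at.
changed = bin(0b01111111 ^ 0b10000000).count("1")
print(changed)   # 8
```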

Typically, people make custom symbols, separated into functional groups, with actual signal names on the pins, not one generic XCV2000E-BG560C symbol... At least that's the way I've been doing it, and so has everyone else I know, except one person... Gil Golov <golov@sony.de.REMOVE_THIS> wrote in article <3A83BE21.7FF9A0FD@sony.de.REMOVE_THIS>... > Does anybody have this symbol for Orcad capture? > > Thanks very much in advance. > > Gil Golov > > >

Hi. I know this is OT, but since I have had no success in finding help I'll try here anyway. I am working on a CPLD controlling the transfer of data and signals from 2 pieces of dual-port SRAM (1024*8) to an LCD (EG2401S-AR), 256*64 pixels. I have the datasheet for the EG2401S-ER; I use it even though it's not 100% the right one. This LCD almost lacks its own logic; many signals need to be generated, etc. The LCD works fine now, I can put things into memory and they will come out fine on the display... The problem is that when I try to write to the display at position (x,y) = (0,0), it ends up at (255,62), (10,0) @ (245,62), (10,10) @ (245,52) ... and the datasheet says that it should begin counting from the upper left corner... mine counts from the (lower-1) right corner, and counts upwards towards lower Y-values (if (0,0) is the upper left corner). I have verified that all timing in the datasheet is met, still it doesn't behave properly. Has anyone of you had any similar experience with LCD displays? /Daniel Nilsson

Matt Billenstein wrote: > I've taken on a project where I'll be implementing a number of math > functions on IEEE double precision floating point types (64 bit). > Multiplication, division, addition, and subtraction are fairly straight > forward. I'll need to do cosine, exponential (e^x), and square roots. Any > advice/pointers/book titles would be appreciated. I'll be implementing in > VHDL targeting a large Xilinx VirtexE device (XCV1000E). Hopefully at 66 or > 100 MHz. A few years ago in Dr. Dobb's Journal, there was an article about the Intel division snafu. That might have some good references. Kevin

Sorry for the off-topic and cross-post, but I was curious (since we have the attention of so many now) whether a more "intelligent" floating point scheme exists (i.e. non-IEEE 754/854)? I know computers performed math before Intel's spec, so of course there will be dozens of proprietary formats... It feels like manipulation of floating point data in the 754/854 formats is more cumbersome than it needs to be. Are there any other schemes that are "simpler" (besides fixed point, etc) and/or easier to implement? Any implementations that nicely lend themselves to FPGAs? Obviously one will have to make trade-offs such as bit-size vs. precision, etc., but I'm inquiring about general schemes... Thanks! VR.
