Messages from 99125

Article: 99125
Subject: Re: memories for virtex-4 and Spartan-3E
From: "Andy Peters" <Bassman59a@yahoo.com>
Date: 20 Mar 2006 10:13:30 -0800
Links: << >>  << T >>  << A >>
bachimanchi@gmail.com wrote:
> Hi all,
> I am looking for VHDL instantiations of BRAMs and LUT memories. I copied
> those instantiations from the libraries of Virtex-4 and Spartan-3E, but
> they are still not getting synthesized and implemented. Can someone
> suggest where to look for these instantiations, or can someone send me
> code for BRAMs and LUT memories in Virtex-4? I need them urgently.

The XST manual will tell you how to write code that infers RAMs.

-a
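
For reference, the kind of behavioral template those manuals describe; a
minimal single-port sketch (entity name, widths and ports are illustrative,
not taken from any Xilinx library):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity ram_sp is
  generic (ADDR_W : natural := 10;
           DATA_W : natural := 8);
  port (clk  : in  std_logic;
        we   : in  std_logic;
        addr : in  std_logic_vector(ADDR_W-1 downto 0);
        din  : in  std_logic_vector(DATA_W-1 downto 0);
        dout : out std_logic_vector(DATA_W-1 downto 0));
end entity ram_sp;

architecture rtl of ram_sp is
  type ram_t is array (0 to 2**ADDR_W - 1)
    of std_logic_vector(DATA_W-1 downto 0);
  signal ram : ram_t;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if we = '1' then
        ram(to_integer(unsigned(addr))) <= din;
      end if;
      -- synchronous (registered) read: tools map this onto block RAM;
      -- an unclocked read of ram(...) tends to infer LUT/distributed RAM
      dout <= ram(to_integer(unsigned(addr)));
    end if;
  end process;
end architecture rtl;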


Article: 99126
Subject: Re: ignore thread
From: Jan Panteltje <pNaonStpealmtje@yahoo.com>
Date: Mon, 20 Mar 2006 18:30:00 GMT
Links: << >>  << T >>  << A >>
On a sunny day (Mon, 20 Mar 2006 17:54:47 GMT) it happened "John_H"
<johnhandwork@mail.com> wrote in <H%BTf.4501$tT.1131@news01.roc.ny>:

>Outlook Express (which my Outlook uses as its newsreader) has "Ignore 
>Conversations" and "Block Sender..." available from the Message menu.  First 
>time I've used it.

All so primitive, the newsreader I wrote (Linux NewsFlex, this one:
 http://panteltje.com/panteltje/newsflex/index.html ) has been around now for
almost 8 years... it has a very powerful filter option, probably the best [one] is:
" make any posting with words '...you name it' invisible. "
ANY posting.
So if I was to enter Xilinx in there, there would be one less FPGA manufacturer.
Because it is so powerful, I usually do not bother to use the filters, as it is normally
obvious (I always get headers first) what is interesting and what not.
Some posts I read for amusement, some to learn.
Tonight I was wondering if there was a relation between the quality of the posts
[from a company] and the quality of the software a company makes.
hehe
OK ignore thread.

Article: 99127
Subject: Re: DDS
From: "maxascent" <maxascent@yahoo.co.uk>
Date: Mon, 20 Mar 2006 12:33:08 -0600
Links: << >>  << T >>  << A >>
>Is the accuracy of your time base 1e-14?
>Calculate to 9 digits.
>Measure with a micrometer.
>Mark with chalk.
>Cut with an axe.
>
>Are your specs really necessary?

Not really I may alter them ;)

Jon

>
>"maxascent" <maxascent@yahoo.co.uk> wrote in message 
>news:54CdncX_96TYXYPZRVn_vA@giganews.com...
>>I would like to implement a DDS on a Spartan device. Here is my spec.
>> Clocked at 200MHz. Max Output of 80MHz. 1uHz increments. Is this
>> possible on a Spartan?
>>
>> Cheers
>>
>> Jon 
>
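
For scale: a 1uHz frequency step from a 200MHz clock needs a phase
accumulator of at least log2(200e6 / 1e-6), about 47.5 bits, so 48 bits in
practice (200e6 / 2**48 is roughly 0.71uHz per step). A minimal sketch of
such an accumulator, with illustrative names and an assumed 12-bit sine
table address:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity dds_phase_acc is
  generic (ACC_W  : natural := 48;   -- 200e6 / 2**48 ~ 0.71 uHz resolution
           LUT_AW : natural := 12);  -- top bits index a sine lookup table
  port (clk       : in  std_logic;
        tuning    : in  unsigned(ACC_W-1 downto 0);  -- f_out = tuning*f_clk/2**ACC_W
        sine_addr : out unsigned(LUT_AW-1 downto 0));
end entity dds_phase_acc;

architecture rtl of dds_phase_acc is
  signal phase : unsigned(ACC_W-1 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      phase <= phase + tuning;       -- wraps modulo 2**ACC_W
    end if;
  end process;
  sine_addr <= phase(ACC_W-1 downto ACC_W-LUT_AW);
end architecture rtl;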



Article: 99128
Subject: Re: Xilinx programming cable; Linux notebook w/o parallel port; Am I doomed?
From: cs_posting@hotmail.com
Date: 20 Mar 2006 10:35:39 -0800
Links: << >>  << T >>  << A >>
GHEDWHCVEAIS@spammotel.com wrote:

> I browsed through some threads about using the various Xilinx
> programming cables. It seems like with my Linux notebook without a
> parallel port I am doomed and not able to use any of them.

Guess you'll just have to make your own.

Find yourself a (linux compatible) USB-enabled microcontroller and
treat the problem as one of in circuit configuration of an FPGA by a
micro.

You'll get higher rates if you let the micro do the timing and feed it
buffered data through the USB.  Trying to bit-bang it through the USB
is possible, but really slow given the higher latency of things like
USB-to-parallel chips vs old parallel ports.


Article: 99129
Subject: Re: Instantiating addsub, comparators in Xilinx
From: Mike Treseler <mike_treseler@comcast.net>
Date: Mon, 20 Mar 2006 10:58:56 -0800
Links: << >>  << T >>  << A >>
Leow Yuan Yeow wrote:
> Hi, for a program such as
> case state is
> when S0=>
>   A <= B + C;
> when S1=>
>   Z <= X + Y;
> does it mean that 2 adders are generated, or will the synthesis recognize 
> the adder can be shared?
> Or do I have to specifically write a multiplexor for the adder? Thanks!

There are no guarantees either way.
But it doesn't really matter.
There are not really any "adder" primitives
inside the fpga. Only gates and flops.

All synthesis guarantees is a netlist that
simulates the same as the source code.
There is no guarantee that the RTL or
technology schematic output
will look like I expect. But it will work.

If I code for an input mux with
one adder, I just might get two adders
and an output mux, if that better
matches the constraint settings or the whim
of the synthesis algorithm.
Or I might get just what I expect.
Or I might get something completely different.

Luckily, synthesis does a better
job, on the average, of packing gates
into an arbitrary device than I do.

           -- Mike Treseler
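
One way to make the sharing explicit is to mux the operands into a single
"+" and steer the result afterwards. A sketch with illustrative names and
widths; as noted above, whether the tool actually keeps it as one adder is
still up to synthesis:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity shared_adder is
  port (clk   : in  std_logic;
        state : in  std_logic;               -- '0' plays the role of S0, '1' of S1
        b, c  : in  unsigned(7 downto 0);
        x, y  : in  unsigned(7 downto 0);
        a_out : out unsigned(7 downto 0);
        z_out : out unsigned(7 downto 0));
end entity shared_adder;

architecture rtl of shared_adder is
  signal op_a, op_b, sum : unsigned(7 downto 0);
begin
  -- operand muxes feeding one adder
  op_a <= b when state = '0' else x;
  op_b <= c when state = '0' else y;
  sum  <= op_a + op_b;

  -- steer the single result to the register selected by the state
  process (clk)
  begin
    if rising_edge(clk) then
      if state = '0' then
        a_out <= sum;
      else
        z_out <= sum;
      end if;
    end if;
  end process;
end architecture rtl;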

Article: 99130
Subject: Re: FPGA FIR advice
From: "Isaac Bosompem" <x86asm@gmail.com>
Date: 20 Mar 2006 10:59:37 -0800
Links: << >>  << T >>  << A >>

Allan Herriman wrote:
> On 20 Mar 2006 07:41:37 -0800, "Isaac Bosompem" <x86asm@gmail.com>
> wrote:
>
> >
> >John_H wrote:
> >> Isaac Bosompem wrote:
> >> > Hi Ray and Peter,
> >> >
> >> > I am sorry for hijacking your thread Roger , but I think my question is
> >> > relevant.
> >> >
> >> > I was thinking of using about 8 FIR (bandpass) filters in parallel to
> >> > create a graphic equalizer. Now I know there are some phase problems
> >> > with this method but it seems to me like a very logical way to go about
> >> > this problem. I was wondering if you guys know of any better methods?
> >> >
> >> > I also was thinking of using 16 taps each.
> >> >
> >> > 320 FF's is not a lot actually. My XC3S200 (which is probably dirt
> >> > cheap) has almost 4000 FF's. Enough for your filter network and much
> >> > more.
> >> >
> >> > -Isaac
> >>
> >> Another great advantage to FPGA FIRs: most of the time the FIRs are
> >> symmetric which allows half the taps (half the multipliers) to implement
> >> the full FIR by adding t-d to t+d before multiplying by the common
> >> coefficient, the implementation is more elegant.
> >
> >Hi John,
> >
> >I cannot see what you mean? Can you offer a quick example?
>
> An FIR filter is implemented as a dot product of a constant vector and
> a vector made up of the input samples delayed,
>
> y[n] = c[0].x[n] + c[1].x[n-1] + ... c[m-1].x[n-m+1]
> for an m tap filter.
>
> The c[n] are the coefficients.  If the filter has a linear phase
> response, the coefficients are symmetric, so
> c[0] = c[m-1], c[1] = c[m-2], etc.
>
> We can group the expression for y[n] as follows:
>
> y[n] = c[0].(x[n] + x[n-m+1]) + c[1].(x[n-1] + x[n-m+2]) + ...
>
> This has (roughly) halved the number of multipliers.
> I say roughly, because m is often odd.
>
> Regards,
> Allan

Ahh, I see, thanks.

Do you guys know of a good filter design software, I have an old one
for DOS, but it is quite difficult to use (Also I do not have access to
MATLAB).

-Isaac
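
A sketch of the folded structure Allan describes, assuming an even tap
count and placeholder coefficients (all names, widths and coefficient
values are illustrative):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity folded_fir is
  generic (TAPS   : natural := 8;    -- m, assumed even (and 8 for the constant below)
           IN_W   : natural := 12;
           COEF_W : natural := 16);
  port (clk  : in  std_logic;
        x_in : in  signed(IN_W-1 downto 0);
        y    : out signed(IN_W+COEF_W+3 downto 0));
end entity folded_fir;

architecture rtl of folded_fir is
  type dline_t is array (0 to TAPS-1) of signed(IN_W-1 downto 0);
  type coef_t  is array (0 to TAPS/2-1) of signed(COEF_W-1 downto 0);
  -- placeholder coefficients: only half are stored, since c(k) = c(TAPS-1-k)
  constant C : coef_t := (to_signed(100, COEF_W), to_signed(300, COEF_W),
                          to_signed(900, COEF_W), to_signed(1500, COEF_W));
  signal dline : dline_t := (others => (others => '0'));
begin
  process (clk)
    variable presum : signed(IN_W downto 0);
    variable acc    : signed(y'range);
  begin
    if rising_edge(clk) then
      dline <= x_in & dline(0 to TAPS-2);             -- tapped delay line
      acc := (others => '0');
      for k in 0 to TAPS/2-1 loop
        -- pre-add the two samples that share coefficient C(k): half the multipliers
        presum := resize(dline(k), IN_W+1) + resize(dline(TAPS-1-k), IN_W+1);
        acc    := acc + resize(presum * C(k), acc'length);
      end loop;
      y <= acc;
    end if;
  end process;
end architecture rtl;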


Article: 99131
Subject: Re: Urgent Help Needed!!!!!
From: Eric Smith <eric@brouhaha.com>
Date: 20 Mar 2006 11:07:49 -0800
Links: << >>  << T >>  << A >>
fpga_toys@yahoo.com writes:
> The cultural and class bigotry in this thread is appalling. If some of

There are some people doing appalling things in this thread, but I don't
think we would be likely to agree on who is doing them.

Article: 99132
Subject: Re: SerialATA with Virtex-II Pro
From: Thomas Maaø Langås <tlan@stud.ntnu.no>
Date: Mon, 20 Mar 2006 19:42:12 +0000 (UTC)
Links: << >>  << T >>  << A >>
Antti <Antti.Lukats@xilant.com> wrote:
> marvell has a PATA<->SATA bridge that is available for low volume
> ordering - it is easier to connect than PCI-SATA

The Virtex-II Pro will have no trouble doing USB 2.0, right?  So doing 
something like this:

SATA device <-> USB 2.0 Bridge <-> Virtex-II Pro <-> USB 2.0 Bridge <-> SATA Host

I know the first bridge can be done with this chip:
http://www.jmicron.com.tw/PDF/JM20338/JM20339.PDF

I don't know if that chip would be able to do the last bridge (eg. the one
between FPGA and SATA Host).

-- 
Thomas

Article: 99133
Subject: Re: Urgent Help Needed!!!!!
From: Ralf Hildebrandt <Ralf-Hildebrandt@gmx.de>
Date: Mon, 20 Mar 2006 21:26:23 +0100
Links: << >>  << T >>  << A >>
fpga_toys@yahoo.com wrote:


> Just being a devils advocate here, partly because I have a terrible
> time with my dyslexia ...
...
> Tech speak, is part of this segment of the electronic world ... as I
> say ... get used to it.

Making mistakes is one part. A totally different part is using "a 
different language".

100 years ago it was a sign of high education to speak French at the 
dinner table. At that time it was also common to discuss scientific topics 
in German. Today it is "cool" to use these abbreviations. What comes next? 
L33t?

And - IMHO - what you call "tech speak" is chatting about some popular 
technical topics with a friend living next door. It is not a 
(scientific) discussion.

English as the world's communication language is a gift, because it is 
not too hard and, for a lot of people, quite an easy language. Don't make 
it difficult.


Again: Mistakes happen. I can imagine that this is especially hard for 
you, but everybody will notice that mistakes are "random" (in contrast 
to the discussed abbreviations), and if you mention your dyslexia most 
people will understand. (If not, call them idiots and ignore them.)

Ralf

Article: 99134
Subject: Re: Urgent Help Needed!!!!!
From: fpga_toys@yahoo.com
Date: 20 Mar 2006 12:50:49 -0800
Links: << >>  << T >>  << A >>

Ralf Hildebrandt wrote:
> Again: Mistakes happen. I can imagine that this is especially hard for
> you, but everybody will notice that mistakes are "random" (in contrast
> to the discussed abbreviations), and if you mention your dyslexia most
> people will understand. (If not, call them idiots and ignore them.)

The mistake I was objecting to was the public lynching, and the ranting
to justify it. Its only result is to make certain people uncomfortable
about posting here. It did nothing to further the real discussion topic.



Emily Postnews ... this is sarcasm:

Q: Another poster can't spell worth a damn. What should I post?

A: Post a followup pointing out all the original author's spelling and
grammar mistakes. You were almost certainly the only one to notice
them, genius that you are, so not only will others be intrigued at your
spelling flame, but they'll get to read such fine entertainments rather
than any actual addressing of the facts or issues in the message.


Article: 99135
Subject: PacoBlaze with multiply and 16-bit add/sub instructions
From: "Pablo Bleyer Kocik" <pablobleyer@hotmail.com>
Date: 20 Mar 2006 13:02:57 -0800
Links: << >>  << T >>  << A >>
 Hello people.

 As I announced some days ago, I updated the PacoBlaze3 core
[http://bleyer.org/pacoblaze/] now with a wide ALU that supports an 8x8
multiply instruction ('mul') and 16-bit add/sub operations ('addw',
'addwcy', 'subw', 'subwcy'). The new extension core is called
PacoBlaze3M. It could be useful performing small DSP functions and math
subroutines when there is a spare hardware multiplier block.

 The implementation scheme modifies the PicoBlaze register model
dividing it in odd/even (high/low) sections with a multiplexing layer.
16-bit writes are performed on both odd/even registers. The multiply
operation accepts any two arbitrary registers and the wide add/sub
instructions operate on contiguous 16-bit "extended" registers.

 Eg: (KCAsm code)

---8<---

test_mul: ; mul example
	load s0, $ca ; s0 = 0xca
	load s2, $fe ; s2 = 0xfe
	mul s0, s2 ; {s1,s0} = 0xca * 0xfe = 0xc86c

test_addw: ; addw example
	load s1, $ca ; s1 = 0xca ; mix cafe...
	load s0, $fe ; s0 = 0xfe

	load s3, $be ; s3 = 0xbe ; ...with beef
	load s2, $ef ; s2 = 0xef

	addw s2, s0 ; {C,s3,s2} = 0xbeef + 0xcafe = 0x189ed ; yes, you got
189'ed :oP

--->8---

 I am having a bit of trouble intercepting the adder carry in a carry
chain with ISE using behavioral code. I am currently using two muxed
adders (one 8-bit, one 16-bit) for the addsub module instead of the
ideal high/low 8-bit adders with full and half carries. Any ideas on
how to implement this in ISE?

 I will focus now in adding better documentation and some verification
scripts. I also have a small language on the works (sarKCAsm --how
original) that is a macro assembler with operations to code in
Pico/PacoBlaze using commands like s0 = s1+s2, s4 += s5, etc. I will
release that as soon as I finish teaching myself ANTLR.

 Enjoy & rejoice ;o)

--
PabloBleyerKocik /"The danger from computers is not that they will
 pablo          / eventually get as smart as men, but that we will
  @bleyer.org  / meanwhile agree to meet them halfway."-Bernard Avishai
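
On the carry question: one behavioral idiom (a sketch, not the PacoBlaze3M
code itself) is to widen each byte-wide add by one bit so the half and full
carries fall out as ordinary sum bits; whether ISE then maps both halves
onto a single carry chain is not guaranteed:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity add16_carries is
  port (a, b       : in  unsigned(15 downto 0);
        sum        : out unsigned(15 downto 0);
        half_carry : out std_logic;   -- carry out of the low byte
        full_carry : out std_logic);  -- carry out of bit 15
end entity add16_carries;

architecture rtl of add16_carries is
  signal lo, hi : unsigned(8 downto 0);   -- each byte add widened by one bit
  signal cin_hi : unsigned(0 downto 0);
begin
  lo     <= ('0' & a(7 downto 0)) + ('0' & b(7 downto 0));
  cin_hi <= lo(8 downto 8);               -- half carry fed into the upper byte
  hi     <= ('0' & a(15 downto 8)) + ('0' & b(15 downto 8)) + cin_hi;

  sum        <= hi(7 downto 0) & lo(7 downto 0);
  half_carry <= lo(8);
  full_carry <= hi(8);
end architecture rtl;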


Article: 99136
Subject: Re: Xilinx programming cable; Linux notebook w/o parallel port; Am I doomed?
From: "GHEDWHCVEAIS@spammotel.com" <GHEDWHCVEAIS@spammotel.com>
Date: 20 Mar 2006 13:07:34 -0800
Links: << >>  << T >>  << A >>
cs_posting@hotmail.com wrote:
> Guess you'll just have to make your own.
>
> Find yourself a (linux compatible) USB-enabled microcontroller and
> treat the problem as one of in circuit configuration of an FPGA by a
> micro.
>
> You'll get higher rates if you let the micro do the timing and feed it
> buffered data through the USB.  Trying to bit-bang it through the USB
> is possible, but really slow given the higher latency of things like
> USB-to-parallel chips vs old parallel ports.

I am not sure whether I understand that right. Are you talking about a
general USB-to-parallel adapter that would work together with the
Parallel Cable IV in connection with iMPACT? Or is there another way to
solve that?

I have been thinking about a USB-to-parallel adapter.

One thing I would like to avoid is having to program my own driver, and
preferably I would like to use an available virtual COM port driver.
However, as I understand it, iMPACT does not recognize those.


Article: 99137
Subject: Re: Xilinx programming cable; Linux notebook w/o parallel port; Am I doomed?
From: "GHEDWHCVEAIS@spammotel.com" <GHEDWHCVEAIS@spammotel.com>
Date: 20 Mar 2006 13:10:09 -0800
Links: << >>  << T >>  << A >>
John Adair wrote:
> I have seen PCMCIA/Cardbus plug in parallel ports that could solve your
> issue.

Unfortunately my notebook does not have a PCMCIA slot.

Not sure what HP was thinking about. I guess I should not have gotten
such a cheap one ;)


Article: 99138
Subject: Re: microprocessor design: where to go from here?
From: burn.sir@gmail.com
Date: 20 Mar 2006 13:23:56 -0800
Links: << >>  << T >>  << A >>
Göran Bilski wrote:
> My bible on CPU design is "Computer Architecture, A Quantitative Approach".
>
> I never stop reading it.
>
> Göran Bilski


Hello Göran and Ziggy, and thanks for your replies.

Göran: I have the book right here on my desk and it is great. However,
I was looking for something more hands on. You know, more code &
algorithms and less statistics :) Something like a grad level textbook.

ziggy: Haven't checked out Leon3 yet, but I looked around in some newer
SPARC architectures and the only thing i learned was that large
projects in Verilog can get really messy.

...oh no, i just started a flamefest :(

- Burns


Article: 99139
Subject: Re: FPGA FIR advice
From: Ben Twijnstra <btwijnstra@gmail.com>
Date: Mon, 20 Mar 2006 22:40:03 +0100
Links: << >>  << T >>  << A >>
Hi Isaac,

> Do you guys know of a good filter design software, I have an old one
> for DOS, but it is quite difficult to use (Also I do not have access to
> MATLAB).

You could try Altera's FIR Compiler. Even if you don't have a license for
generating the actual FIR cores, it still generates the coefficients for you,
lets you compare real-vs-discrete characteristics, see the effects of
windowing, etc.


Ben

Article: 99140
Subject: Re: for all those who have stopped listening, and are ranting now...
From: fpga_toys@yahoo.com
Date: 20 Mar 2006 13:46:05 -0800
Links: << >>  << T >>  << A >>

Jim Granville wrote:
> Why not take them a sound business plan, I'm sure they would listen ?

I was told once they have an aversion to becoming a systems company,
along with some NIH factors that might make a new kid on the block a
little unwelcome if waving a $3-5B business plan in the air.

I have at times been looking for an RC startup as a senior architect
and/or CTO, plus considering seeking funding based on my own work. I'd
still like to build the multi petaflop system I proposed to several
firms last year, using a large number of XC4VLX200's and RM9000's. Then
spin a few wafers with different programmable architecture to push past
an exaflop by decade end. In the short term I have a few student boards
to build, and finish my proof of concept work.

I've already said more here than I would have planned, but maybe that's
good, as Xilinx's competitors have something to consider about doing
this business right. Peter can keep pushing, and I might even level
their playing field a little more. They might even want to shut me up
by giving me the briefcase full of XC4VLX200's to do the proof of
concept machine right, so I can go sell petaflop RC super computers
with Xilinx defect managed parts instead of A-Team parts. Or maybe
there is an A-Team that is really interested in becoming a $5B company
this decade.


Article: 99141
Subject: Fixed vs Float ?
From: "Roger Bourne" <rover8898@hotmail.com>
Date: 20 Mar 2006 13:56:01 -0800
Links: << >>  << T >>  << A >>
Hello all,

Concerning digital filters, particularly IIR filters, is there a
preferred approach to implementation - are fixed-point calculations
preferred over floating-point? I would be tempted to say yes. But my
google search results leave me baffled, for it seems that floating-point
computations can be just as fast as fixed-point.
Furthermore, assuming that fixed-point IS the preferred choice, the
following question crops up:
If the input to the digital filter is 8 bits wide and the coefficients
are 16 bits wide, then it would stand to reason that the products
between the coefficients and the digital filter intermediate data
values will be 24 bits wide. However, when this 24-bit value is fed
back into the delay element network (which is only 8 bits wide), some
(understatement) resolution will be lost. How is this resolution loss
dealt with? Or will it lead to an erroneous filter?

-Roger


Article: 99142
Subject: Re: PacoBlaze with multiply and 16-bit add/sub instructions
From: Jim Granville <no.spam@designtools.co.nz>
Date: Tue, 21 Mar 2006 10:02:16 +1200
Links: << >>  << T >>  << A >>
Pablo Bleyer Kocik wrote:

>  Hello people.
> 
>  As I announced some days ago, I updated the PacoBlaze3 core
> [http://bleyer.org/pacoblaze/] now with a wide ALU that supports an 8x8
> multiply instruction ('mul') and 16-bit add/sub operations ('addw',
> 'addwcy', 'subw', 'subwcy'). The new extension core is called
> PacoBlaze3M. It could be useful performing small DSP functions and math
> subroutines when there is a spare hardware multiplier block.
> 
>  The implementation scheme modifies the PicoBlaze register model
> dividing it in odd/even (high/low) sections with a multiplexing layer.
> 16-bit writes are performed on both odd/even registers. The multiply
> operation accepts any two arbitrary registers and the wide add/sub
> instructions operate on contiguous 16-bit "extended" registers.
> 
>  Eg: (KCAsm code)
> 
> ---8<---
> 
> test_mul: ; mul example
> 	load s0, $ca ; s0 = 0xca
> 	load s2, $fe ; s2 = 0xfe
> 	mul s0, s2 ; {s1,s0} = 0xca * 0xfe = 0xc86c
> 
> test_addw: ; addw example
> 	load s1, $ca ; s1 = 0xca ; mix cafe...
> 	load s0, $fe ; s0 = 0xfe
> 
> 	load s3, $be ; s3 = 0xbe ; ...with beef
> 	load s2, $ef ; s2 = 0xef
> 
> 	addw s2, s0 ; {C,s3,s2} = 0xbeef + 0xcafe = 0x189ed ; yes, you got
> 189'ed :oP
> 
> --->8---
> 
>  I am having a bit of trouble intercepting the adder carry in a carry
> chain with ISE using behavioral code. I am currently using two muxed
> adders (one 8-bit, one 16-bit) for the addsub module instead of the
> ideal high/low 8-bit adders with full and half carries. Any ideas on
> how to implement this in ISE?
> 
>  I will focus now in adding better documentation and some verification
> scripts. I also have a small language on the works (sarKCAsm --how
> original) that is a macro assembler with operations to code in
> Pico/PacoBlaze using commands like s0 = s1+s2, s4 += s5, etc. I will
> release that as soon as I finish teaching myself ANTLR.
> 
>  Enjoy & rejoice ;o)

Sounds impressive.
You have seen the AS Assembler, and the Mico8 from Lattice ?

FWIR the Mico8 is very similar to PicoBlaze (as expected, both are
tiny FPGA-targeted CPUs), but I think with a larger jump and call reach 
(but simpler RET options).
If you are loading on features, the call-lengths might need attention ?

Have you tried targeting this to a lattice device ?

-jg




Article: 99143
Subject: Re: Xilinx programming cable; Linux notebook w/o parallel port; Am I doomed?
From: fpga_toys@yahoo.com
Date: 20 Mar 2006 14:08:39 -0800
Links: << >>  << T >>  << A >>

GHEDWHCVEAIS@spammotel.com wrote:
> I browsed through some threads about using the various Xilinx
> programming cables. It seems like with my Linux notebook without a
> parallel port I am doomed and not able to use any of them.

The only other solution I can think of is pretty nasty, and that is
doing some Linux driver work to emulate the parallel-port bit-banging
over a USB cable.


Article: 99144
Subject: Re: Fixed vs Float ?
From: Tim Wescott <tim@seemywebsite.com>
Date: Mon, 20 Mar 2006 14:19:19 -0800
Links: << >>  << T >>  << A >>
Roger Bourne wrote:

> Hello all,
> 
> Concerning digital filters, particularly IIR filters, is there a
> preferred approach to implementation - are fixed-point calculations
> preferred over floating-point? I would be tempted to say yes. But my
> google search results leave me baffled, for it seems that floating-point
> computations can be just as fast as fixed-point.
> Furthermore, assuming that fixed-point IS the preferred choice, the
> following question crops up:
> If the input to the digital filter is 8 bits wide and the coefficients
> are 16 bits wide, then it would stand to reason that the products
> between the coefficients and the digital filter intermediate data
> values will be 24 bits wide. However, when this 24-bit value is fed
> back into the delay element network (which is only 8 bits wide), some
> (understatement) resolution will be lost. How is this resolution loss
> dealt with? Or will it lead to an erroneous filter?
> 
> -Roger
> 
This is a simple question with a long answer.

Floating point calculations are always easier to code than fixed-point, 
if for no other reason than you don't have to scale your results to fit 
the format.

On a Pentium in 'normal' mode floating point is just about as fast as 
fixed point math; with the overhead of scaling floating point is 
probably faster -- but I suspect that fixed point is faster in MMX mode 
(someone will have to tell me).  On a 'floating point' DSP chip you can 
also expect floating point to be as fast as fixed.

On many, many cost effective processors -- including CISC, RISC, and 
fixed-point DSP chips -- fixed point math is significantly faster than 
floating point.  If you don't have a ton of money and/or if your system 
needs to be small or power-efficient fixed point is mandatory.

In addition to cost constraints, floating point representations use up a 
significant number of bits for the exponent.  For most filtering 
applications these are wasted bits.  For many calculations using 16-bit 
input data the difference between 32 significant bits and 25 significant 
bits is the difference between meeting specifications and not.

For _any_ digital filtering application you should know how the data 
path size affects the calculation.  Even though I've been doing this for 
a long time I don't trust to my intuition -- I always do the analysis, 
and sometimes I'm still surprised.

In general for an IIR filter you _must_ use significantly more bits for 
the intermediate data than the incoming data.  Just how much depends on 
the filtering you're trying to do -- for a 1st-order filter you usually 
need to do better than the fraction of the sampling rate you're trying to 
filter, for a 2nd-order filter you need to go down to that fraction 
squared*.  So if you're trying to implement a 1st-order low-pass filter 
with a cutoff at 1/16th of the sample rate you need to carry more than 
four extra bits; if you wanted to use a 2nd-order filter you'd need to 
carry more than 8 extra bits.

Usually my knee-jerk reaction to filtering is to either use 
double-precision floating point or to use 32-bit fixed point in 1r31 
format.  There are some less critical applications where one can use 
single-precision floating point or 16-bit fractional numbers to 
advantage, but they are rare.

* There are some special filter topologies that avoid this, but if 
you're going to use a direct-form filter out of a book you need fraction^2.

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Posting from Google?  See http://cfaj.freeshell.org/google/
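
To put the guard-bit point in RTL terms: a first-order low-pass of the form
y += (x - y) * 2**-SHIFT only works if the accumulator keeps more fractional
bits than SHIFT, otherwise the increment truncates to zero for small errors.
A minimal fixed-point sketch (names, widths and the GUARD choice are
illustrative):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity iir_lp1 is
  generic (IN_W  : natural := 8;
           SHIFT : natural := 4;    -- feedback gain 2**-SHIFT, a fairly gentle low-pass
           GUARD : natural := 6);   -- extra LSBs kept internally, a bit more than SHIFT
  port (clk  : in  std_logic;
        x_in : in  signed(IN_W-1 downto 0);
        y    : out signed(IN_W-1 downto 0));
end entity iir_lp1;

architecture rtl of iir_lp1 is
  -- the accumulator keeps GUARD extra fractional bits; feeding the truncated
  -- 8-bit output back instead would lose the small increment every sample
  signal acc : signed(IN_W+GUARD-1 downto 0) := (others => '0');
begin
  process (clk)
    variable x_ext : signed(IN_W+GUARD-1 downto 0);
    variable diff  : signed(IN_W+GUARD downto 0);   -- one extra bit for the subtraction
  begin
    if rising_edge(clk) then
      x_ext := shift_left(resize(x_in, IN_W+GUARD), GUARD);
      diff  := resize(x_ext, IN_W+GUARD+1) - resize(acc, IN_W+GUARD+1);
      acc   <= acc + resize(shift_right(diff, SHIFT), IN_W+GUARD);  -- y += (x-y)*2**-SHIFT
    end if;
  end process;
  y <= acc(IN_W+GUARD-1 downto GUARD);   -- guard bits dropped only at the output
end architecture rtl;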

Article: 99145
Subject: FATAL_ERROR while creating a test bench waveform (ISE WebPack 8.1.01i)
From: "bobrics" <bobrics@gmail.com>
Date: 20 Mar 2006 14:21:51 -0800
Links: << >>  << T >>  << A >>
Hi,

I am getting the following error when trying to add test bench waveform
to one of my design files. After that, ISE WebPack 8.1.01i crashes
completely!

FATAL_ERROR:HDLParsers:vhptype.c:174:$Id: vhptype.c,v 1.8 2005/04/29
15:34:22 drm Exp $:200 - INTERNAL ERROR... while parsing
"C:/Xilinx/work/myfile.vhd" line 244. Contact your hot line. Process
will terminate.... etc..

which is followed by another error window:

FATA_ERROR:GuiUtilities:Gq_Application.c:570:1.12.10.2 - This
application has discovered an exceptional condition from which it
cannot recover. Process will terminate.  etc...

I have searched the answers database and didn't find anything useful.
What do you think? How should I fix that problem?

I will try to test with the other source files. 

Thank you


Article: 99146
Subject: Re: Fixed vs Float ?
From: Tim Wescott <tim@seemywebsite.com>
Date: Mon, 20 Mar 2006 14:27:24 -0800
Links: << >>  << T >>  << A >>
Tim Wescott wrote:

> Roger Bourne wrote:
> 
>> Hello all,
>>
>> Concerning digital filters, particularly IIR filters, is there a
>> preferred approach to implementation - are fixed-point calculations
>> preferred over floating-point? I would be tempted to say yes. But my
>> google search results leave me baffled, for it seems that floating-point
>> computations can be just as fast as fixed-point.
>> Furthermore, assuming that fixed-point IS the preferred choice, the
>> following question crops up:
>> If the input to the digital filter is 8 bits wide and the coefficients
>> are 16 bits wide, then it would stand to reason that the products
>> between the coefficients and the digital filter intermediate data
>> values will be 24 bits wide. However, when this 24-bit value is fed
>> back into the delay element network (which is only 8 bits wide), some
>> (understatement) resolution will be lost. How is this resolution loss
>> dealt with? Or will it lead to an erroneous filter?
>> -Roger
>>
> This is a simple question with a long answer.
> 
> Floating point calculations are always easier to code than fixed-point, 
> if for no other reason than you don't have to scale your results to fit 
> the format.
> 
> On a Pentium in 'normal' mode floating point is just about as fast as 
> fixed point math; with the overhead of scaling floating point is 
> probably faster -- but I suspect that fixed point is faster in MMX mode 
> (someone will have to tell me).  On a 'floating point' DSP chip you can 
> also expect floating point to be as fast as fixed.
> 
> On many, many cost effective processors -- including CISC, RISC, and 
> fixed-point DSP chips -- fixed point math is significantly faster than 
> floating point.  If you don't have a ton of money and/or if your system 
> needs to be small or power-efficient fixed point is mandatory.
> 
> In addition to cost constraints, floating point representations use up a 
> significant number of bits for the exponent.  For most filtering 
> applications these are wasted bits.  For many calculations using 16-bit 
> input data the difference between 32 significant bits and 25 significant 
> bits is the difference between meeting specifications and not.
> 
> For _any_ digital filtering application you should know how the data 
> path size affects the calculation.  Even though I've been doing this for 
> a long time I don't trust to my intuition -- I always do the analysis, 
> and sometimes I'm still surprised.
> 
> In general for an IIR filter you _must_ use significantly more bits for 
> the intermediate data than the incoming data.  Just how much depends on 
> the filtering you're trying to do -- for a 1st-order filter you usually 
> need to do better than the fraction of the sampling rate you're trying to 
> filter, for a 2nd-order filter you need to go down to that fraction 
> squared*.  So if you're trying to implement a 1st-order low-pass filter 
> with a cutoff at 1/16th of the sample rate you need to carry more than 
> four extra bits; if you wanted to use a 2nd-order filter you'd need to 
> carry more than 8 extra bits.
> 
> Usually my knee-jerk reaction to filtering is to either use 
> double-precision floating point or to use 32-bit fixed point in 1r31 
> format.  There are some less critical applications where one can use 
> single-precision floating point or 16-bit fractional numbers to 
> advantage, but they are rare.
> 
> * There are some special filter topologies that avoid this, but if 
> you're going to use a direct-form filter out of a book you need fraction^2.
> 
Oops -- thought I was responding on the dsp newsgroup.

Everything I said is valid, but if you're contemplating doing this on an 
FPGA the impact of floating point vs. fixed is in logic area and speed 
(which is why fast floating point chips are big, hot and expensive). 
Implementing an IEEE compliant floating point engine takes a heck of a 
lot of logic, mostly to handle the exceptions.  Even if you're willing 
to give up compliance for the sake of speed you still have some 
significant extra steps you need to take with the data to deal with that 
pesky exponent.  I'm sure there are various forms of floating point IP 
out there that you could try on for size to get a comparison with 
fixed-point math.

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Posting from Google?  See http://cfaj.freeshell.org/google/

Article: 99147
Subject: Re: FPGA FIR advice
From: Jan Panteltje <pNaonStpealmtje@yahoo.com>
Date: Mon, 20 Mar 2006 22:40:34 GMT
Links: << >>  << T >>  << A >>
On a sunny day (20 Mar 2006 10:59:37 -0800) it happened "Isaac Bosompem"
<x86asm@gmail.com> wrote in
<1142881177.169633.276610@i39g2000cwa.googlegroups.com>:

>
>Allan Herriman wrote:
>> On 20 Mar 2006 07:41:37 -0800, "Isaac Bosompem" <x86asm@gmail.com>
>> wrote:
>>
>> >
>> >John_H wrote:
>> >> Isaac Bosompem wrote:
>> >> > Hi Ray and Peter,
>> >> >
>> >> > I am sorry for hijacking your thread Roger , but I think my question is
>> >> > relevant.
>> >> >
>> >> > I was thinking of using about 8 FIR (bandpass) filters in parallel to
>> >> > create a graphic equalizer. Now I know there are some phase problems
>> >> > with this method but it seems to me like a very logical way to go about
>> >> > this problem. I was wondering if you guys know of any better methods?
>> >> >
>> >> > I also was thinking of using 16 taps each.
>> >> >
>> >> > 320 FF's is not a lot actually. My XC3S200 (which is probably dirt
>> >> > cheap) has almost 4000 FF's. Enough for your filter network and much
>> >> > more.
>> >> >
>> >> > -Isaac
>> >>
>> >> Another great advantage to FPGA FIRs: most of the time the FIRs are
>> >> symmetric which allows half the taps (half the multipliers) to implement
>> >> the full FIR by adding t-d to t+d before multiplying by the common
>> >> coefficient, the implementation is more elegant.
>> >
>> >Hi John,
>> >
>> >I cannot see what you mean? Can you offer a quick example?
>>
>> An FIR filter is implemented as a dot product of a constant vector and
>> a vector made up of the input samples delayed,
>>
>> y[n] = c[0].x[n] + c[1].x[n-1] + ... c[m-1].x[n-m+1]
>> for an m tap filter.
>>
>> The c[n] are the coefficients.  If the filter has a linear phase
>> response, the coefficients are symmetric, so
>> c[0] = c[m-1], c[1] = c[m-2], etc.
>>
>> We can group the expression for y[n] as follows:
>>
>> y[n] = c[0].(x[n] + x[n-m+1]) + c[1].(x[n-1] + x[n-m+2]) + ...
>>
>> This has (roughly) halved the number of multipliers.
>> I say roughly, because m is often odd.
>>
>> Regards,
>> Allan
>
>Ahh, I see, thanks.
>
>Do you guys know of a good filter design software, I have an old one
>for DOS, but it is quite difficult to use (Also I do not have access to
>MATLAB).
http://www.mediatronix.com/FIRTool.htm
Also runs in Linux windows emulator wine.

There are several more (remez under Linux, for example), but this one is
free, has a usable GUI, and not a lot of silly limitations.
Have only played with it a little bit, but maybe you will like it.



Article: 99148
Subject: Re: Xilinx programming cable; Linux notebook w/o parallel port; Am I doomed?
From: Jan Panteltje <pNaonStpealmtje@yahoo.com>
Date: Mon, 20 Mar 2006 22:52:48 GMT
Links: << >>  << T >>  << A >>
On a sunny day (20 Mar 2006 13:07:34 -0800) it happened
"GHEDWHCVEAIS@spammotel.com" <GHEDWHCVEAIS@spammotel.com> wrote in
<1142888854.711945.278530@z34g2000cwc.googlegroups.com>:

>cs_posting@hotmail.com wrote:
>> Guess you'll just have to make your own.
>>
>> Find yourself a (linux compatible) USB-enabled microcontroller and
>> treat the problem as one of in circuit configuration of an FPGA by a
>> micro.
>>
>> You'll get higher rates if you let the micro do the timing and feed it
>> buffered data through the USB.  Trying to bit-bang it through the USB
>> is possible, but really slow given the higher latency of things like
>> USB-to-parallel chips vs old parallel ports.
>
>I am not sure whether I understand that right. Are you talking about a
>general USB-to-parallel adapter that would work together with the
>Parallel Cable IV in connection with iMPACT? Or is there another way to
>solve that?
>
>I have been thinking about a USB-to-parallel adapter.
>
>One thing I would like to avoid is having to program my own driver, and
>preferably I would like to use an available virtual COM port driver.
>However, as I understand it, iMPACT does not recognize those.
I am a bit anti-USB (because I still have not got to grips with those drivers
100% perhaps), but there exists a PCMCIA-to-JTAG plug-in module for TV hacking.
Googling "PCMCIA to JTAG" shows a lot.
And then perhaps address the PCMCIA card directly as I/O?
If you could use the Parallel Cable III JTAG software (as Digilent uses to
program the D2), maybe it would even be possible to connect that PCMCIA
connector directly to that cable..
Have not tried.
Memory mapped, make a simple interface?

Article: 99149
Subject: Re: Spartan-3E Sample Pack
From: ziggy <ziggy@fakedaddress.com>
Date: Mon, 20 Mar 2006 23:08:11 GMT
Links: << >>  << T >>  << A >>
In article <1142816051.442711.255950@i40g2000cwc.googlegroups.com>,
 "Peter Alfke" <alfke@sbcglobal.net> wrote:

> In the bottom left corner are two flags.
> If you do not mind the Union Jack of the colonial past, I have clicked
> it for you.
> 
> http://www.simple-solutions.de/en/products/index.php
> 
> Peter Alfke

Strange, I didn't see it the first time.. but now I do..


