Messages from 96450

Article: 96450
Subject: Re: Looking for literature on microprogrammed machines
From: Philip Freidin <philip@fliptronics.com>
Date: Fri, 03 Feb 2006 21:08:32 GMT
On 3 Feb 2006 11:29:12 -0800, "Paul Marciano" <pm940@yahoo.com> wrote:
>
>I am interested in studying the implementation of simple
>microprocessors and microprogrammed/microcoded machines and would like
>some literature pointers.
>
>I still have my university text, "The Architecture of Microprocessors"
>by Francois Anceau... but I find it as hard to read now as I did back
>in '88.
>
>Any recommended texts?
>
>Thanks,
>Paul.

This online book should answer most of your questions

   http://www10.dacafe.com/book/parse_book.php?article=BITSLICE/index.html

and

   http://www.donnamaie.com/BITSLICE/controller-index.html



Philip Freidin



Philip Freidin
Fliptronics

Article: 96451
Subject: Re: BGA central ground matrix
From: Austin Lesea <austin@xilinx.com>
Date: Fri, 03 Feb 2006 13:19:05 -0800
Jim,

It has to do with current creating a magnetic field, and how the 
magnetic fields interact.

Imagine I have a rectangular loop (tall and skinny), divided down the 
middle by a sheet of glass.

On either side of the glass I have a scale (made of plastic) to  see how 
much the wire pulls away from the glass as the current increases in the 
loop.

At some point, I add a third wire on one side of the glass, in parallel. 
It is some distance away from the glass, farther than the first set of 
wires.

What I claim is that the force on the third, added wire will be less than 
that on the first wire, and the force on the first wire on the same side of 
the glass will be somewhat less than before, but will not drop to 1/2.  In 
fact, with the BART rail spacing, it would be 2/3 and 1/3.

At DC.
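
Taking the numbers in this thread at face value: if the return current in each 
rail scales as 1/r (the guess quoted further down), and the far rail is assumed 
to sit at twice the distance of the near one (the actual BART spacing is not 
given here), the 2/3 - 1/3 split is just

\[
\frac{I_{\text{near}}}{I_{\text{far}}}
  = \frac{r_{\text{far}}}{r_{\text{near}}}
  = \frac{2r}{r} = 2
\quad\Longrightarrow\quad
I_{\text{near}} = \tfrac{2}{3}\,I_{\text{return}},\qquad
I_{\text{far}}  = \tfrac{1}{3}\,I_{\text{return}}
\]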

Guess what?  Current  creates a field, a field tells current how to flow.

I think Faraday discovered this?

This works, by the way, for superconducting wires; resistance plays no part 
in it.  R does not appear in the equations that show this is true.

QED for this "Gedanken" experiment...

Austin

Jim Granville wrote:

> Austin Lesea wrote:
> 
>> Paul,
>>
>> The latter (b).
>>
>> They do carry current, but it is falling off as 1/r or 1/r^2 (I just 
>> can't remember which).
>>
>> The BART rails had 2/3 nearest the power rail, and 1/3 in the rail 
>> furthest.  Which makes me think it was 1/r, not 1/r^2.
>>
>> Also, BART has shorting links every X meters that ties the two rails 
>> together (now) to lessen the return resistance (improve efficiency).
> 
> 
> Hmmm....
> 
>> I think I was told that the inner 2X2 balls had 1/8 to 1/16 the 
>> current...but it may have been more (or less).
> 
> 
> Hmmmm....
> 
>> As I already said, I will post some results (when I find them).
> 
> 
> Please do, we can agree there is an effect, my antennae just question
> how much of an effect at DC ?.
> 
>  You still have to satisfy ohms law, so any push effects that favour
> flow, have to model somehow as mV(uV) generators....
>  To skew Ball DC currents 7/8 or 15/16, frankly sounds implausible, and 
> maybe the models there forgot to include resistance balancing effects ?
>  [ ie do not believe everything you are 'told' ]
> 
> -jg
> 
> 
> 
> 

Article: 96452
Subject: Re: FPGA growth vs. ASIC growth
From: langwadt@ieee.org
Date: 3 Feb 2006 13:32:26 -0800

Paul Johnson wrote:
> On Fri, 03 Feb 2006 00:52:20 -0800, Caleb Leak <dornif@gmail.com>
> wrote:
>
> > I am trying to show the gap
> >narrowing between these two over time.
>
> Why would it narrow? At first sight, it should stay constant. For a
> given FPGA technology, the differences are simply due to the extra die
> resources required to implement a programmable feature as opposed to a
> fixed feature. This difference should stay constant as both the FPGA
> and ASIC move to new processes.

Technically it may not change, but in practice it might for some
applications. Unless the ASIC volume is huge, it may not make sense to
spend the huge NRE to get into the latest process (what's the price of
a 90nm mask set? 500K$?). FPGAs are generic, so they have the volume to
take advantage of the newest process.

-Lasse


Article: 96453
Subject: Re: FPGA growth vs. ASIC growth
From: Austin Lesea <austin@xilinx.com>
Date: Fri, 03 Feb 2006 14:04:43 -0800
Lasse,

500K$?

What a deal.  Where can you get a 90nm mask set for that price?

We have seen studies that development and productization of a 90nm chip 
costs from 70 to 180 Million $ (US).

I cannot comment on how much it costs Xilinx, but if you look at our 
R&D spending as a percentage of revenue in our public stock and 
business reports, those numbers are in the right order of magnitude.

Like they say about opera, it is all over when the fat lady sings.  So 
it is with the ASIC "heyday".  The singing is over.  The growth is 
entirely negative.  ASIC starts are shrinking by a factor of ten per year.

Now ASSPs are still doing very well, as a standard product still is a 
money maker, even with these high development costs.  And a standard 
product costs a lot less when the die is 1/2 the area in the latest 
technology....

Austin

Article: 96454
Subject: Re: FPGA growth vs. ASIC growth
From: Paul Johnson <abuse@127.0.0.1>
Date: Fri, 03 Feb 2006 22:07:35 +0000
On 3 Feb 2006 09:25:41 -0800, "Peter Alfke" <peter@xilinx.com> wrote:

>Not too many potential ASIC users can afford to invest $50 M or 200
>man-years in the development of a state-of-the-art ASIC

To be fair, almost nobody spends these figures. You can get into the
ASIC game for as little as $100K in NRE plus, say, $50K in tools. All
those graphs I've ever seen showing a cross-over point at 10's of K
pieces are complete nonsense. These are real numbers for a .11um
structured ASIC.
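
(For reference, those cross-over graphs are just a break-even calculation. With 
the $100K NRE plus $50K tools above, and purely hypothetical unit prices of $50 
for the FPGA and $15 for the structured ASIC,

\[
N \, c_{\text{FPGA}} = \text{NRE} + N \, c_{\text{ASIC}}
\;\Longrightarrow\;
N^{*} = \frac{\text{NRE}}{c_{\text{FPGA}} - c_{\text{ASIC}}}
      = \frac{150{,}000}{50 - 15} \approx 4300 \text{ pieces,}
\]

which is consistent with the claim that a cross-over in the tens of thousands 
of pieces is far too high.)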

>I suggest to look at the Spartan-3 family, our highest-running 90-nm
>family. Few designers would call Spartan-3 "large, expensive, and low
>yield,,,and not useful for most customers"

Well, no, and nor would I; I like them. If you want to know what I
mean try to get sensible quotes and lead times on an XC4VFX20,
XC4VFX40 or XC4VLX80 (not random choices - three I've tried to get
quotes on recently).

>ASICs are for extreme applications: extreme quantity, extreme
>complexity, extreme speed, and extremely low power.

Not exactly fair. I've spent a lot of time looking (and doing), and my
rules of thumb are, more or less:

1) FPGAs top out at about 750K 'real' gates, and about 40-80MHz real
system speeds. Anything beyond this is too expensive (unless you've
got *really* small volumes) or too difficult. Yes, I know everybody is
going to argue about the exact figures.

2) 1M - ~4M 'real' gates, 75-250MHz, ~1500 pcs/yr for 3 years =>
structured ASIC, no brainer

3) 4M+ gates, 300MHz+, ~3K pcs/yr for 3 years => standard cell

4) If you're going to spend $500K over 3 years (NRE + device costs)
then the structured ASIC vendors will talk to you

5) If you're going to spend $1M over 3 years (NRE + device costs) then
the standard cell vendors will talk to you

Of course, for options 2-5 you need to buy real tools, and you need
to know how to verify.

>But then I admit to being biased...

Indeed...  :)

PS - if anybody out there wants to subcontract a structured ASIC
conversion, reply and I'll send you a real email address. Sorry,
Peter...  :)


Article: 96455
Subject: Re: Sharing BRAM between Xilinx PowerPC's (on data-OCM ports)
From: "Jeff Shafer" <shafer@delete-to-reply.aquaweb.pair.com>
Date: Fri, 3 Feb 2006 16:08:12 -0600
Thanks for the idea Sylvain.  That would work for the test program I 
described, but that was written only to illustrate the problem.  In reality, 
we'd like to have this shared scratchpad readable and writable by both 
PowerPCs.   So while registering and inverting the clock would work for the 
writes, I think it would corrupt reads by that processor....

Thanks,

Jeff


"Sylvain Munaut" <com.246tNt@tnt> wrote in message 
news:43e3b444$0$13886$ba620e4c@news.skynet.be...
> As I understand, you use 1 port always for read and another always for
> write.
>
> So you could clock the write port on the negative edge even while
> keeping the two processors and their plb bus on the same clock. Let's
> say clk is your main clock. Instead of feeding di, dip and wren directly
> to the BRAM, register them in the clk domain, then connect the output of
> these registers to the respective port of the BRAM and feed "not clk" to
> the wrclk pin of the BRAM.
>
>
> Sylvain 



Article: 96456
Subject: Re: Looking for literature on microprogrammed machines
From: "Paul Marciano" <pm940@yahoo.com>
Date: 3 Feb 2006 14:08:32 -0800

Philip Freidin wrote:
> This online book should answer most of your questions
>
>    http://www10.dacafe.com/book/parse_book.php?article=BITSLICE/index.html
> and
>    http://www.donnamaie.com/BITSLICE/controller-index.html

Thanks Philip - that's exactly what I was looking for.

Regards,
Paul.


Article: 96457
Subject: Re: Looking for literature on microprogrammed machines
From: Paul Johnson <abuse@127.0.0.1>
Date: Fri, 03 Feb 2006 22:14:06 +0000
On 3 Feb 2006 11:29:12 -0800, "Paul Marciano" <pm940@yahoo.com> wrote:

>
>I am interested in studying the implementation of simple
>microprocessors and microprogrammed/microcoded machines and would like
>some literature pointers.
>
>I still have my university text, "The Architecture of Microprocessors"
>by Francois Anceau... but I find it as hard to read now as I did back
>in '88.
>
>Any recommended texts?

I don't know if you can buy it any more, but the bible was 'Bit-slice
microprocessor design', Mick and Brick, 0-07-041781-4. Very easy
reading, very practical. For the theoretical stuff you can get
Hennessy & Patterson, though you probably won't need to read any of it.

Article: 96458
Subject: Re: Looking for literature on microprogrammed machines
From: "Paul Marciano" <pm940@yahoo.com>
Date: 3 Feb 2006 14:34:49 -0800
Paul Johnson wrote:
> On 3 Feb 2006 11:29:12 -0800, "Paul Marciano" <pm940@yahoo.com> wrote:
> I don't know if you can buy it any more, but the bible was 'Bit-slice
> microprocessor design', Mick and Brick, 0-07-041781-4. Very easy
> reading, very practical.

Thanks Paul - I found it used at Amazon.

> For the theoretical stuff you can get
> Hennessy & Patterson, though you probably wont need to read any of it.

Got that one already, but I'd be lying if I said I'd read much of it
;-)

Thanks again,
Paul.


Article: 96459
Subject: Re: FPGA growth vs. ASIC growth
From: "Peter Alfke" <peter@xilinx.com>
Date: 3 Feb 2006 14:35:27 -0800
The big dedicated players, LSI Logic, Xilinx, Altera, Lattice, Actel,
et al., are all publicly traded companies.
If you want to check on their success, just watch their relative stock
performance.
The beauty of capitalism: hot air does not help, the Bottom Line speaks
clearly.
Peter Alfke


Article: 96460
Subject: Re: How will synthesizers handle these statements?
From: "Rob Dekker" <rob@verific.com>
Date: Fri, 03 Feb 2006 22:56:34 GMT
Hi Frank,

The tests that you wrote will create what are called "common sub-expressions".
These are typically eliminated very early in the synthesis flow.

So I am convinced that you are free to write the same tests in different processes,
and it should not affect the synthesis result.

Just try to keep them the same as much as possible, so that common sub-expression
elimination is guaranteed to work.


Rob



"Frank" <frank.invalid@hotmail.com> wrote in message news:43e176bb$1@news.starhub.net.sg...
>I put these conditions in different always and if-else-if statements, will
> design compiler & ISE be smart enough to recognise them and reduce
> hardware cost accordingly?
>
> I had a tendency to write the conditions with a wire & assign statement
> e.g.:
> wire cond1; assign cond1 = pop && (process == 8'h25) || kick;
> but if synthesizers handles these, then it will save me some thinking.
>
>
>
>
>
>
> always @ (posedge clk)
> begin
> if (pop && (process == 8'h25) || kick)
>    whatever <= asdf;
> else if (pop1 && (process == 8'h25) || kick1)
>    whatever <= asdf1;
> else if (pop2 && (process == 8'h25) || kick2)
>    whatever <= asdf2;
> end
>
> always @ (posedge clk)
> begin
> if (pop && (process == 8'h25) || kick)
>    whatever1 <= asdf3;
> else if (pop1 && (process == 8'h25) || kick1)
>    whatever1 <= asdf4;
> else if (pop3 && (process == 8'h25) || kick3)
>    whatever1 <= asdf5;
> end
>
>
>
> 
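
For illustration only (using Frank's hypothetical signal names from the quoted 
code, not anything from the original posts): the factored form that Rob's advice 
amounts to simply pulls each shared condition into a named wire, and common 
sub-expression elimination should give the same logic either way.

wire cond1 = (pop  && (process == 8'h25)) || kick;
wire cond2 = (pop1 && (process == 8'h25)) || kick1;
wire cond3 = (pop2 && (process == 8'h25)) || kick2;

always @ (posedge clk)
begin
  if (cond1)
     whatever <= asdf;
  else if (cond2)
     whatever <= asdf1;
  else if (cond3)
     whatever <= asdf2;
end

The second always block would reuse the same cond1/cond2 wires, plus one more
wire for its pop3/kick3 branch.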



Article: 96461
Subject: Re: FPGA growth vs. ASIC growth
From: Paul Johnson <abuse@127.0.0.1>
Date: Fri, 03 Feb 2006 23:12:28 +0000
On Fri, 03 Feb 2006 14:04:43 -0800, Austin Lesea <austin@xilinx.com>
wrote:

>Lasse,
>
>500K$?
>
>What a deal.  Where can you get a 90nm mask set for that price?

Have you actually got a quote for a 90nm mask set? Can you find a real
number on the web for a real mask price? I'm willing to bet the answer
to both of those is no. $500K was the standard figure going around the
web 5 years ago, when 90nm was new. And how many masks did that cover?
35? How many masks does the average ASIC developer pay for? 5?

>We have seen studies that development and productization of a 90nm chip 
>costs from 70 to 180 Million $ (US).
>I can not comment on how much it costs Xilinx, but if you look at our 
>R&D spending as a percentage of revenues from our public stock and 
>business reports, those numbers are in the right magnitude range.

According to Gartner Dataquest, 20% of all design starts in the
Americas are now at 90nm. Correct me if I'm wrong, but my guess is
that Xilinx is responsible for about 1% of design starts in the
Americas. The other guys are not paying 70 - $180M.

>Like they say about Opera, it is all over when the fat lady sings.  So 
>it is with ASICs "heyday".  The singing is over.  The growth is entirely 
>negative.  The ASIC starts are shrinking by a factor of ten per year.

Austin, that's just nonsense. No-one has the real numbers, but there
is general agreement - IIRC, it's about a year since I collected the
figures - that starts have dropped from ~10k/year to a figure of
between 2K and 4K/year, *over 5 years*. The analyst who came up
with the 2K figure also ignored ASSPs, which makes no sense. ASSPs
*are* ASICs. He's also ignored the fact that ASIC starts are clearly
going to decrease because people roll a number of existing devices
into one larger device. The figures were also compiled at a time when
no-one was going into 90nm because of the complexity. See any number
of commentaries by Gartner. Everyone knows that the real revenue is in
ASICs, not FPGAs. Gartner also says that the ASIC market *grew* 11% in
2004. 

>Now ASSPs are still doing very well, as a standard product still is a 
>money maker, even with these high development costs.  And a standard 
>product costs a lot less when the die is 1/2 the area in the latest 
>technology....

FPGAs form a small part of the totality of ASSPs. IIRC, there are
currently about 2000 ASSP design starts a year. And yes, of course,
they all get better in new technologies.

Come on, guys, this is not a Xilinx marketing forum. Perhaps you could
credit us with a little more intelligence.


Article: 96462
Subject: fpga hardware "breakpoint"
From: shawnn@gmail.com
Date: 3 Feb 2006 15:24:49 -0800
Hello,

When doing development using microcontrollers/processors, you can often
find ICEs and ICDs that allow you to set breakpoints. You can stop the
code in execution and view the contents of registers, state of input
pins, etc.

Suppose I want to do something similar with an FPGA-based design. What
are my options? I know I can output internal signals to output pins and
sniff them using a logic analyzer, but I'm hoping there is a more
elegant solution. I'd like to stop everything at some point and view
all inputs, outputs, registers, etc.

Can someone point me in the right direction?


Article: 96463
Subject: Re: Sharing BRAM between Xilinx PowerPC's (on data-OCM ports)
From: Sylvain Munaut <com.246tNt@tnt>
Date: Sat, 04 Feb 2006 00:34:46 +0100
Jeff Shafer wrote:
> Thanks for the idea Sylvain.  That would work for the test program I 
> described, but that was written only to illustrate the problem.  In reality, 
> we'd like to have this shared scratchpad readable and writable by both 
> PowerPCs.   So while registering and inverting the clock would work for the 
> writes, I think it would corrupt reads by that processor....

It depends on the controller. It can work if you can modify it to tolerate
more pipelining (i.e. the read data doesn't appear at T+1 but at T+2, or
T+3, depending on whether your timing margin requires you to re-register
dout or not).

But the two processors are not equal ... one has less latency ...


> Thanks,
> 
> Jeff
> 
> 
> "Sylvain Munaut" <com.246tNt@tnt> wrote in message 
> news:43e3b444$0$13886$ba620e4c@news.skynet.be...
> 
>>As I understand, you use 1 port always for read and another always for
>>write.
>>
>>So you could clock the write port on the negative edge even while
>>keeping the two processors and their plb bus on the same clock. Let's
>>say clk is your main clock. Instead of feeding di, dip and wren directly
>>to the BRAM, register them in the clk domain, then connect the output of
>>these registers to the respective port of the BRAM and feed "not clk" to
>>the wrclk pin of the BRAM.
>>
>>
>>Sylvain 
> 
> 
> 
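
To make the suggestion quoted above concrete, here is a minimal Verilog sketch.
It is only an illustration under assumed names and widths (wr_data, wr_en,
wr_addr, rd_addr, and a 512 x 32 array standing in for the BRAM and its
controller wiring), not the actual data-OCM interface:

// First register the write-side signals in the clk domain...
reg [31:0] wr_data_r;
reg        wr_en_r;
reg [8:0]  wr_addr_r;

always @(posedge clk) begin
  wr_data_r <= wr_data;   // the "di" input (parity/"dip" omitted here)
  wr_en_r   <= wr_en;     // the "wren" input
  wr_addr_r <= wr_addr;
end

// ...then run the write port on the inverted clock.  The registered
// signals are stable around the falling edge, so each write lands half
// a cycle after the rising-edge activity on the read side.
reg [31:0] mem [0:511];   // behavioral stand-in for the BRAM primitive

always @(negedge clk)
  if (wr_en_r)
    mem[wr_addr_r] <= wr_data_r;

// The read port stays on the normal rising edge of clk.
reg [31:0] rd_data;
always @(posedge clk)
  rd_data <= mem[rd_addr];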

Article: 96464
Subject: Re: FPGA growth vs. ASIC growth
From: Austin Lesea <austin@xilinx.com>
Date: Fri, 03 Feb 2006 15:56:21 -0800
Paul,

I see you make up your rules too as you go along.

Credit me with knowing what I know, and I'll credit you with what you know.

ASIC starts are decreasing.  Fact.

FPGA "starts" are increasing.  Also a fact.

By how much, and when, is variable depending on who you quote.

My opinion is it is going down the tubes pretty fast.  Your opinion is 
that it is not (if you include ASSP's).

I don't include ASSP's, as they are not our primary competition.  We are 
making inroads into embedded computing, and extreme DSP, as well as 
taking more sockets that would have been ASICs (if the customer could 
afford it).

I have had customers come to visit.  They say "we can't afford to make 
ASICs any longer, we need to learn how to use your FPGAs."  These are 
not little companies.  These are multinational corporations with sites 
around the world.

My job is tougher because now I have to explain to folks who used to 
design ASICs for a living.  They KNOW all of the terrible ultra deep 
submicron potholes.  And they (sometimes) do not give us any credit for 
having also experienced all the potholes, and already gotten around 
them.  They somehow feel that making FPGAs is easy.  Well, many have 
tried, and most have failed.  Must be trivial, right?

Austin

Article: 96465
Subject: Re: FPGA growth vs. ASIC growth
From: "Peter Alfke" <peter@xilinx.com>
Date: 3 Feb 2006 15:59:38 -0800
Paul, don't get offended. You were the one who stoked the fire.
The OP asked for a technical comparison between FPGAs and ASICs. And I
interpret that as a comparison between customer-specific designs
(leaving out microprocessors and ASSPs, since they serve a different
market requirement; if uPs and/or ASSPs fill the need, anybody would be
a fool not to use them).
So the comparison here is only between two different ways to achieve a
custom hardware solution: FPGAs or ASICs.
The relative merits have been described ad nauseam. My point was that
you cannot discuss this without mentioning economics. And economics
more and more favor FPGAs, as Moore's Law drives all of us to smaller
geometries. This may sound like Xilinx Marketing, but it also happens
to be a fact.
The ASIC market is still big, but relative to FPGAs it is shrinking,
especially when you look at the number of new design starts.
There will always be a market for both methods, but the old religion of
"Real men do ASICs" is fading, similar to Jerry Sanders' "Real men have
fabs". We have all got smarter with time.
Peter Alfke


Article: 96466
Subject: Re: fpga hardware "breakpoint"
From: "Peter Alfke" <peter@xilinx.com>
Date: 3 Feb 2006 16:01:38 -0800
Search the Xilinx website for "Readback"
Peter Alfke


Article: 96467
Subject: Re: fpga hardware "breakpoint"
From: Mike Treseler <mike_treseler@comcast.net>
Date: Fri, 03 Feb 2006 16:37:40 -0800
shawnn@gmail.com wrote:

> When doing development using microcontrollers/processors, you can often
> find ICEs and ICDs that allow you to set breakpoints. You can stop the
> code in execution and view the contents of registers, state of input
> pins, etc.
> Suppose I want to do something similar with an FPGA-based design. What
> are my options? I know I can output internal signals to output pins and
> sniff them using a logic analyzer, but I'm hoping there is a more
> elegant solution. I'd like to stop everything at some point and view
> all inputs, outputs, registers, etc.


If you learn an HDL, modelsim will allow
you to trace your code and set breakpoints
during a simulation. Much more elegant
than a logic analyzer.

         -- Mike Treseler
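
As a small, self-contained illustration of that route (the design under test
here is just a counter, purely a stand-in): a testbench process can watch for
a condition of interest and call $stop, which suspends the run so every signal
and register can be inspected from the simulator, much like a hardware
breakpoint.

module tb;
  reg clk = 0;
  always #5 clk = ~clk;           // free-running testbench clock

  // Stand-in "design under test": an 8-bit counter.
  reg [7:0] count = 0;
  always @(posedge clk)
    count <= count + 1;

  // "Breakpoint": halt the simulator when the condition fires.
  always @(posedge clk)
    if (count == 8'd42) begin
      $display("breakpoint hit at %0t, count = %0d", $time, count);
      $stop;                      // suspend; resume from the simulator
    end
endmodule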

Article: 96468
Subject: Re: FPGA ogg Vorbis/Theora player
From: Eric Smith <eric@brouhaha.com>
Date: 03 Feb 2006 16:47:05 -0800
"urielka" <uriel.katz@gmail.com> writes:
> i am quite a newbie in FPGA development.
> my idea is build a SoC(System on Chip) which will be a Video/Audio
> player based on FPGA(Xilinx Spartan 3E 1M gate count).

i am quite a newbie in structural engineering.
my idea is build a suspension bridge which will have ten lanes
of traffic over grand canyon.

> from what i understand,doing a full ogg decoder on chip is madness so
> what i have to do i build coproccessors that will do most of
> clock-expensive and the software will use those coprocessors,right?

You have to read the various Ogg specifications and figure out what
parts those are.

> the storage will be a CF card which are damn cheap and can work as
> IDEs.
> the part i want to know if this is posible to be done in a FPGA.

The CF interface is definitely possible in an FPGA.  As for the rest,
probably yes, but it's not exactly the sort of project that is
recommended for a newbie.

> i could move all the ogg vorbis decoder to a ASIC vorbis decoder(they
> exsist :) ) so saving the time about implenting the ogg vorbis format.

Good idea.

> what you think,it is posible? which soft cpu(s) should i use,

Depends on what your performance and cost requirements are.

> which things the cpu need to preform real fast?

That's something you need to research.

> is it posible on a
> Spartan(development boards for spartan are damn cheap).

Spartan 3 or 3e should be good.  A Virtex 4 offers a lot more
processing power, and the FPGA fabric is faster, but it costs a
lot more.

> if it is important to someone the desgin(the final one) will be powered
> with a LiPo battery :)

Not for very long.

I would think that for this application you'd be much better off using
an existing ARM SoC.

Article: 96469
Subject: Re: FPGA growth vs. ASIC growth
From: cs_posting@hotmail.com
Date: 3 Feb 2006 17:01:55 -0800
Austin Lesea wrote:

> ASIC starts are decreasing.  Fact.
>
> FPGA "starts" are increasing.  Also a fact.
>
> By how much, and when, is variable depending on who you quote.
>
> My opinion is it is going down the tubes pretty fast.  Your opinion is
> that it is not (if you include ASSP's).

A future of SRAM-FPGA based commodity consumer products would put the
hobbyist/hacker community in heaven...

...but I doubt it's likely to happen

(note, I'm not saying people would alter the manufacturer's bitstreams
- rather they'd create new designs using that board, either in the
spirit of the original product function or something totally different)


Article: 96470
Subject: Re: Xilinx: generic tristates and multiplexers
From: Brian Drummond <brian_drummond@btconnect.com>
Date: Sat, 4 Feb 2006 01:16:24 +0000 (UTC)
On 3 Feb 2006 09:12:06 -0800, "JL" <kasty.jose@gmail.com> wrote:

>Thanks Brian, that works. Just one hint: the line driving the bus to
>'H' must be enclosed between "synopsys translate_off" and "synopsys
>translate_on" metacommands. Like this:

A very good idea. (Of course I thought about it just after posting)

>If you fail to keep the extra code away from synthesis, Xilinx XST will
>complain about many sources driving a wire.

To some extent, this is a tools issue. I seem to recall Leonardo coping
with this, as it should, but your suggestion is good practice anyway.

- Brian

Article: 96471
Subject: Re: Back to max thermal and power for XC4VLX200's
From: "Marc Randolph" <mrand@my-deja.com>
Date: 3 Feb 2006 20:58:11 -0800

Austin Lesea wrote:
> Marc,
>
> I was only talking about flip chip.
>
> The die vias, metal, bumps, planes, vias and balls are all metal.  They
> conduct heat very well.  They are very short lengths.  The epoxy and pcb 
> material also is a great conductor of heat.
>
> The grease and copper top is good, but not as good.  Especially after
> 800 microns of glass (the backside of the die).  Further to go.  And
> then one more interface to the air (terrible) or to a heatsink (also may 
> not be very good).

Austin,

I understand now.  Thanks for the response - among other things, the
glass on the backside of the die was something I overlooked.

Thank you,

   Marc


Article: 96472
Subject: core generator
From: "CMOS" <manusha@millenniumit.com>
Date: 3 Feb 2006 21:11:54 -0800
Hi,
I'm in the process of implementing the OpenRISC 1000 processor. I have
the ISE 7.1 WebPack, and it does not contain the CORE Generator. I'm
following a tutorial that guides me in implementing the processor on the
FPGA. However, the tutorial uses CORE Generator to build an on-chip RAM
component. I'm stuck there because I don't have the CORE Generator.
Please let me know whether it is possible to download CORE Generator for
free (I tried but failed), or about any other alternatives to CORE
Generator.

Thank you


Article: 96473
Subject: Re: core generator
From: Paul Hartke <phartke@Stanford.EDU>
Date: Fri, 03 Feb 2006 21:35:55 -0800
Webpack version 8.1 now includes Coregen:
http://www.xilinx.com/ise/logic_design_prod/webpack.htm

Paul

CMOS wrote:
> 
> Hi,
> I'm in the process of implementing the OpenRISC 1000 processor. I have
> the ISE 7.1 WebPack, and it does not contain the CORE Generator. I'm
> following a tutorial that guides me in implementing the processor on the
> FPGA. However, the tutorial uses CORE Generator to build an on-chip RAM
> component. I'm stuck there because I don't have the CORE Generator.
> Please let me know whether it is possible to download CORE Generator for
> free (I tried but failed), or about any other alternatives to CORE
> Generator.
> 
> Thank you

Article: 96474
Subject: Re: Parallel Cable IV does not work with parallel to usb cable
From: "Antti Lukats" <antti@openchip.org>
Date: Sat, 4 Feb 2006 08:31:38 +0100
"antonio bergnoli" <bergnoli@pd.infn.it> schrieb im Newsbeitrag 
news:43e2789f$2_3@x-privat.org...
> Ray Andraka ha scritto:
>> Sean Durkin wrote:
>>
>>> Marco T. wrote on 01.02.2006 09:56:
>>>
>>>> Do you know a all-in-one port replicator with usb, serial and ps/2 
>>>> connectors that works with Parallel Cable IV?
>>>
>>>
>>> Haven't been able to find one of those either... The problem seems to be
>>> that iMPACT/Chipscope don't recognize the "virtual" LPT-ports those port
>>> replicators usually provide...
>>> There are parallel-port-controllers for Cardbus/PCMCIA you can plug in
>>> to get a "real" parallel port on your laptop, but I haven't tried any of
>>> those, so I can't comment on how good they are.
>>> The problem is the chipset: to get decent programming speeds, the
>>> parallel port should support 2MHz or 5MHz operation. All
>>> PCI-plugin-cards I've seen in stores lately use the same cheap
>>> controller-chip that doesn't support operation above 1MHz, so the cable
>>> will work in compatibility mode and drop down to 200kHz.
>>>
>>> Instead, I suggest buying a Platform USB cable. Gives you much less
>>> trouble in the long run, and works well on every modern machine.
>>> ... if you can afford it, that is. I think it's $150, so about double
>>> what the parallel cable costs. Plus, I'm not sure if it works under
>>> Linux, but there have been discussions about that here lately.
>>>
>>> cu,
>>> Sean
>>
>>
>> If you are programming through the JTAG interface, you could try the 
>> Digilent USB-JTAG cable, which is under $40.
>
> is the Digilent USB-JTAG cable compatible with iMPACT? or is it necessary to 
> use other software?

not compatible
Antti




