
# Messages from 800

Article: 800
Subject: RE: FPGA Custom Computing Machine
From: randraka@ids.net
Date: Fri, 3 Mar 95 13:01:09 GMT
Links: << >>  << T >>  << A >>
In Article <linchih.794184341@guitar>
linchih@guitar.ece.ucsb.edu (Chih-chang Lin) writes:
>
>Hi,
>
>  I am looking for the information (call for paper, registration)
>for "FPGA Custom Computing Machine workshop".
>
>
>Chih-chang Lin
>UC, Santa Barbara

FCCM is April 11-14 in Napa.  For more info refer to the Mosaic URL page:
http://www.super.org:8000/FPGA/comp.arch.fpga.  You can also email Ken Pocek
at kpocek@sc.intel.com or Peter Athanas at athanas@vt.edu for details.  They
are the co-chairs for the workshop.

-Ray Andraka
Chairman, the Andraka Consulting Group
401/884-7930    FAX 401/884-7950
email randraka@ids.net

The Andraka Consulting Group is a digital hardware design firm specializing in
obtaining maximum performance from FPGAs.  Services include complete design,
development, simulation and integration of these devices and the surrounding
circuits.  We also evaluate, troubleshoot and improve existing designs.  Please
call or write for a brochure.


Article: 801
Subject: Re: FPGA Custom Computing Machine
From: kugel@mp-sun6.informatik.uni-mannheim.de (Andreas Kugel)
Date: 3 Mar 1995 14:18:18 GMT
Links: << >>  << T >>  << A >>
get it from http://www.super.org:8000/FPGA/fccm95.html

---

--------------------------------------------------------
Andreas Kugel
Chair for Computer Science V      Phone:(49)621-292-5755
University of Mannheim            Fax:(49)621-292-5756
A5
D-68131 Mannheim
Germany
e-mail:kugel@mp-sun1.informatik.uni-mannheim.de
--------------------------------------------------------


Article: 802
Subject: Needed Price List for XC3000 and XC4000 series from USA
From: Sergey A. Chernyshov <chern@unc.nnov.su>
Date: Fri, 03 Mar 95 19:17:39 +0300
Links: << >>  << T >>  << A >>

Dear Sirs !

I would like to get a price list for the XC3000 and XC4000 FPGA series from
vendors in the USA.

The second question: what is the price discount for the Xilinx hardwired gate
arrays that are identical counterparts to the FPGAs?

----
Sergey Chernyshov,
chern@unc.nnov.su


Article: 803
Subject: Re: Limits on on-chip FPGA virtual computing
From: dong@icsl.ee.washinton.edu (Dong-Lok Kim)
Date: 3 Mar 1995 20:48:10 GMT
Links: << >>  << T >>  << A >>
Sami Sallinen (sjs@varian.fi) wrote:
: dong@icsl.ee.washinton.edu (Dong-Lok Kim) wrote:
: >
: > Hi,
: >
: > I happened to read a paper
: > 	"Area & Time Limitations of FPGA-based Virtual Hardware",
: ..

: There might be a significant overhead when compared to a fixed VLSI solution
: both real-estate- and performance-wise, but when you compare trying to implement
: the same features with a software-only solution the odds are reversed.

Yes, but the issue here is whether the "uncommitted logic" on the expensive
real estate of the main processor can be justified or not.  The area cost of
an FPGA versus fixed VLSI is about 100 times, so it might not be feasible.

But the above paper seems to compare the area-speed cost of FPGA in an unfair
way, i.e., they did not consider the fact that the FPGA section can contain
multiple functions. If the reconfigurability is considered, the area factor
must be scaled accordingly (which they did not).

--
Donglok Kim

ICSL (Image Computing Systems Lab)
Dept. of Electrical Eng., FT-10
University of Washington
Seattle, WA 98195

Phone) 543-1019
FAX)   543-0977


Article: 804
Subject: RE: Limits on on-chip FPGA virtual computing
From: andre@ai.mit.edu (Andre' DeHon)
Date: Sat, 4 Mar 1995 01:11:52 GMT
Links: << >>  << T >>  << A >>

> Subject: Limits on on-chip FPGA virtual computing
> From: dong@icsl.ee.washinton.edu (Dong-Lok Kim)

> I happened to read a paper
>	 "Area & Time Limitations of FPGA-based Virtual Hardware",
>	 Osama T. Albaharna, etc, IEEE International Conference
>	 on Computer Design 1994, pp. 184-189.
> and found a quite interesting fact from it as follows:
>
> The author says they found the FPGA area to implement certain set of circuits
> has overhead of ~100 times compared to the fixed VLSI implementation. Also, the
> delay overhead is ~10 times, so using the FPGA on the same die with a RISC core
> would not be feasible with the current technology. (Please forgive me if
> I am misinterpreting his conclusion).

Interestingly enough, they seem to make a very different conclusion
in "Virtual Hardware and the Limits of Computational Speed-up" [same
authors, ISCAS '94].  Quoting the final paragraph of the paper:

\begin{quote}

Our investigation indicates that even with these limitations, an
FPGA-based platform can still outperform today's advanced general purpose
processors.  Furthermore, an adaptive platform is a better utilisation of
the extra transistors than the integration of an additional processor which
could only give, at best, a maximum overall speed-up of 2.  Finally, as the
enhancement area increases, the speed advantage over using multiple
processors decreases.

\end{quote}

> Another paper that attempts such an integration of FPGA and a CPU core on the
> same chip was
>	 "DPGA-Coupled Microprocessors: Commodity ICs for the Early 21st
>	 Century", Andre DeHon, IEEE Workshop on FPGAs for Custom Computing
>	 Machines, 1994, pp. 31-39.
> Here, the author just seems to assume that the integration of the FPGA area
> on the same die is feasible (very well phrased, but without any proof).

Thanks :-)  Hmm, depends on what you call feasible....

I'm not aware of any technical limitations which make it difficult
to integrate the two on the same die.  After all, many high-end processors
and FPGAs are run on the same fab lines.

Of course, there still remains the question of whether or not it is
profitable or beneficial.  As both I and Albaharna et al. summarize, there
is plenty of point evidence to suggest that mixed FPGA+uP systems
can outperform conventional, uP only systems on a variety of tasks with
small cost additions (simply put, more than double the performance at less
than half the cost).
One thing to note here is that the overhead factors quoted from the
earlier Albaharna et al. paper assume you build the *same* circuitry on
the FPGA as you do in custom logic.  The advantage of the FPGA is that you
can build much more specialized circuitry which is tailored to a particular
problem.  In a general purpose architecture you don't get to build exactly
the circuitry which would make a particular application (with a particular
dataset) fast; you have to build circuitry which does a decent job across
a wide range of applications.  Conversely, when implementing a routine in
an FPGA to accelerate an application, you don't build exactly the circuitry
which the fixed processing unit provides; you build exactly the circuitry
which is most beneficial to the problem.  To beneficially employ the FPGA,
you need to gain enough from function specialization in the FPGA to offset
the overheads associated with using field programmable logic instead of
custom, fixed logic.  In fact, it is because of this tradeoff that designs
which mix reconfigurable and fixed function logic on the same die are
attractive.  Understanding and quantifying this tradeoff and the regions
of benefit more precisely is partially where the research lies.

You might also want to check out:

"A High-Performance Microarchitecture with Hardware-Programmable
Functional Units" in Micro-27 by Rahul Razdan and Michael D. Smith.

They take a particularly restricted view of how one might integrate
FPGA-style logic into a processor (in order to make mapping and
experimentation simple) and how it might be exploited.  Despite
the restrictions placed, they still find that the incorporation of the
programmable logic is a profitable use of silicon area.

> I just wonder if any of you want to discuss about the limits of this
> idea (i.e., on-chip FPGA + CPU core) and the above papers. I can provide
> further information about the papers if you want. (Do the authors read this
> news group?)

"DPGA-Coupled Microprocessors: Commodity ICs for the Early 21st Century" is
available as:

HTML: http://www.ai.mit.edu/projects/transit/tn100/tn100.html
PS:   ftp transit.ai.mit.edu:papers/dpga-proc-fccm94.ps.Z
slides: ftp transit.ai.mit.edu:slides/dpga+proc-fccm94.ps.Z

Andre'


Article: 805
Subject: Re: Power gain when moving from FPGA to Gate Array
From: mjodalfr@nmia.com (mjodalfr)
Date: Fri, 3 Mar 1995 21:54:26
Links: << >>  << T >>  << A >>
In article <3j6qgl$4la@euas20.eua.ericsson.se> ekatjr@eua.ericsson.se
(Robert Tjarnstrom) writes:
>What are the experiences of reduction in power dissipation/consumption
>when moving an FPGA design to a Gate Array?
>Obviously there are two different situations:
>A) 1 FPGA -> 1 Gate Array. Is a factor 4 power gain reasonable to expect?

Depends upon the device utilization of the FPGA.  For instance, in a 5K-gate
FPGA with a usable gate count of 2K, there is a 2.5x reduction right there.
But it really depends on the vendors of the FPGA and gate array, and on what
process they use (cell size: 0.6u, 0.8u, etc.).

>B) N FPGA -> 1 Gate Array. Here the power gain should be considerably
>larger due to reduced I/O power.
>
>Opinions are welcome.
>Robert Tjarnstrom

Wassail,
MjodalfR


Article: 806
Subject: RE: FPGA Custom Computing Machine
From: hutch@timp.ee.byu.edu (Brad Hutchings)
Date: 3 Mar 1995 23:33:06 -0700
Links: << >>  << T >>  << A >>
In article <3j873u$61c@paperboy.ids.net>, randraka@ids.net writes:
|>
|> FCCM is April 11-14 ...

Correction. It is April 19-21. The correct date is on the mosaic page.

--
Brad L. Hutchings (801) 378-2667          Assistant Professor
Brigham Young University - Electrical Eng. Dept. - 459 CB - Provo, UT 84602
Reconfigurable Logic Laboratory


Article: 807
Subject: Re: IST Drying Up In North America
From: Jonathan AH Hogg <jonathan@dcs.gla.ac.uk>
Date: Sat, 4 Mar 1995 17:14:18 GMT
Links: << >>  << T >>  << A >>
On Thu, 2 Mar 1995, John Cooley wrote:

> As far as I know, Actel was the only company that signed an OEM deal with IST
> to resell their software.  If you were a user of this tool either through the
> Actel OEM process or through some other independent IST sales channel, I'd
> like to hear what you thought of it in a technical sense.  Was this a case of
> the financial people killing a possibly good product because they saw its
> market as too competitive, or was the tool they were offering buggy & ugly
> compared to other FPGA synthesis tools?

Compass Design Automation also resold IST's FPGA optimisation tools. the
truth is that they were very buggy. i would say that IST was a
particularly badly run company who put very little effort into supporting
their tools.

this is just my personal opinion after working at Compass for three
months.

\onathan
(__)

--
Jonathan AH Hogg, Computing Science Department, The University, Glasgow G12 8QQ
jonathan@dcs.gla.ac.uk  http://www.dcs.gla.ac.uk/~jonathan  0141 339 8855 x2069


Article: 808
Subject: Re: area of RAM cells in FPGAs
From: Alfred <100441.524@CompuServe.COM>
Date: 4 Mar 1995 21:54:29 GMT
Links: << >>  << T >>  << A >>
SRAM in Atmel FPGAs takes up about 3.5 cells per bit for
standard SRAM; register files can be more efficient.  The new
Atmel architecture (which will be introduced in the second half of
'96) will be much better in terms of on-chip SRAM.


Article: 809
Subject: Re: Questions of implementing asynchronous circuits using FPGAs.
From: murray@src.dec.com (Hal Murray)
Date: 5 Mar 1995 06:12:16 GMT
Links: << >>  << T >>  << A >>
In article <199502280015.TAA02793@play.cs.columbia.edu>, cheng@news.cs.columbia.edu (Fu-Chiung Cheng) writes:
|> We are considering implementing asynchronous circuits using FPGAs.
|> We need to choose FPGAs such that hazard-free logic can be realized
|> and the FPGAs can be reprogrammable in circuits.

I've worked with the Xilinx 3000 series parts but don't know much about any other
FPGAs.

I'm not quite sure what you want to do.  I think you want to build self-timed
state machines using RS flip-flops made out of gates rather than the
traditional edge-triggered logic where everything runs off the same clock.

First, the good news:

If you look carefully in their app notes, you can find a place where they will
promise that if you only change one input to a CLB at a time, the output won't
glitch.

I occasionally build RS flip flops out of gates.  They work.

The architecture of the part encourages/expects designs where most of the logic
runs on the same clock edge.  The software does too. If you don't design that way
you are giving up a major fraction of the resources on the chip.  That may be OK
for an educational experiment.

Examples:

Within a CLB, there are feedback paths from the FFs to the input.  If you
want similar feedback without using a FF you have to use external routing
resources.

The FFs have a clock-enable.  This can frequently be used as an extra
logic input.

The FFs have an asynchronous reset input.  Again, this is occasionally handy
as an extra logic term.  (I try to avoid it because it is asynchronous.)  It
is very handy for (re)initialization.

There is also a reset pin on the chip that clears all the FFs in the chip.
Again, handy for smashing things back into a known state.

The timing analyzer knows how to analyze clock-FF=>gates=>setup-FF paths.  If
you have feedback paths within your gates it will scream at you.

So, yes, you can do it.  It may not be much fun.
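For illustration only, here is a tiny behavioral Python sketch of the gate-built RS flip-flop discussed above (cross-coupled NOR gates); it models only the logic, not CLB routing, timing, or glitch behavior, and all names are made up:

```python
def nor(a, b):
    """2-input NOR gate, operating on 0/1 values."""
    return 0 if (a or b) else 1

def rs_latch(s, r, q, qbar, iterations=4):
    """Iterate a cross-coupled NOR pair from state (q, qbar) until it settles.

    qbar feeds back into the R-side gate and q into the S-side gate,
    just like an RS flip-flop built from two gates."""
    for _ in range(iterations):
        q, qbar = nor(r, qbar), nor(s, q)
    return q, qbar

q, qbar = rs_latch(s=1, r=0, q=0, qbar=1)     # set: q becomes 1
q, qbar = rs_latch(s=0, r=0, q=q, qbar=qbar)  # hold: q stays 1
```

With both s and r low the pair simply holds its state, which is the property the single-input-change promise in the app notes is protecting.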


Article: 810
Subject: Re: Power gain when moving from FPGA to Gate Array
From: devb@elvis.vnet.net (David Van den Bout)
Date: 5 Mar 1995 14:17:25 -0500
Links: << >>  << T >>  << A >>
>   Depends upon the device utilization of the FPGA..... For instance in a 5K
>gate FPGA, with useable gate count of 2K..... there is a 2.5 reduction
>there.... but it really depends on the vendor of the FPGA, Gate array.... what
>process do they use ( size of cell... .6, .8u etc... )

Not quite.  Since the unused logic blocks are usually not clocked,
there is usually not much power drain from them.

--

||  Dave Van den Bout  ||
||  Xess Corporation   ||


Article: 811
Subject: Re: Limits on on-chip FPGA virtual computing
Date: 5 Mar 1995 21:38:26 GMT
Links: << >>  << T >>  << A >>
I'm very interested in this argument.  I have also heard different opinions.
Some say that FPGAs need too much area, which could instead be spent on
specialized VLSI functions in 1/10 of the area.  Others say that FPGAs offer
a speed-up factor by implementing specialized functions at run time, managed
by the compiler.  An interesting paper is from Smith, published in MICRO-27;
if you are interested I can send the WWW address.

Are the articles you cite electronically available?  If not, could someone
send me a copy of the papers?

Alessandro De Gloria

Dong-Lok Kim (dong@icsl.ee.washinton.edu) wrote:
: Hi,

: I happened to read a paper
: 	"Area & Time Limitations of FPGA-based Virtual Hardware",
: 	Osama T. Albaharna, etc, IEEE International Conference
: 	on Computer Design 1994, pp. 184-189.
: and found a quite interesting fact from it as follows:

: The author says they found the FPGA area to implement certain set of circuits
: has overhead of ~100 times compared to the fixed VLSI implementation. Also, the
: delay overhead is ~10 times, so using the FPGA on the same die with a RISC core
: would not be feasible with the current technology. (Please forgive me if
: I am misinterpreting his conclusion).

: Another paper that attempts such an integration of FPGA and a CPU core on the
: same chip was
: 	"DPGA-Coupled Microprocessors: Commodity ICs for the Early 21st
: 	Century", Andre DeHon, IEEE Workshop on FPGAs for Custom Computing
: 	Machines, 1994, pp. 31-39.
: Here, the author just seems to assume that the integration of the FPGA area
: on the same die is feasible (very well phrased, but without any proof).

: I just wonder if any of you want to discuss about the limits of this
: idea (i.e., on-chip FPGA + CPU core) and the above papers. I can provide
: further information about the papers if you want. (Do the authors read this
: news group?)

: --
: Donglok Kim

: ICSL (Image Computing Systems Lab)
: Dept. of Electrical Eng., FT-10
: University of Washington
: Seattle, WA 98195

: Phone) 543-1019
: FAX)   543-0977


Article: 812
Subject: Re: area of RAM cells in FPGAs
From: lemieux@eecg.toronto.edu (Guy Gerard Lemieux)
Date: 5 Mar 95 22:19:03 GMT
Links: << >>  << T >>  << A >>
In article <3janil$q4b$1@mhadf.production.compuserve.com>,
Alfred  <100441.524@CompuServe.COM> wrote:
>SRAM in ATmel FPGAs takes up abouut 3.5 cells per bit for
>standard SRAM, register files can be more efficient. The new

i think you are saying it takes 3.5 logic cells to emulate an SRAM
bit in an atmel fpga, but i don't think this answered the author's
question.

the question was how much die area (as a percentage the size of a
logic cell, for instance) does an SRAM configuration bit take?  eg,
how big is a bit in the lookup table, a bit which tells the flip-flop
whether to operate in D/T/JK modes, etc.  you know... those bits that
get blasted in at power-up to configure the device :-)

i think we can safely form an upper bound on the size of the SRAM
cell := die size / number of configuration bits.
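Purely as arithmetic, that bound works out like this (both numbers below are invented for illustration, not taken from any real device):

```python
# Upper bound on config-cell area: die area / number of configuration bits.
# Both inputs are hypothetical, purely to show the arithmetic.
die_area_mm2 = 100.0      # assumed die size
config_bits = 250_000     # assumed configuration bitstream length

bound_um2 = die_area_mm2 * 1e6 / config_bits   # convert mm^2 to um^2
print(bound_um2)   # at most this many square microns per configuration bit
```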

a typical configuration memory is built from a 5-transistor SRAM cell
(think of a pass-gate and 2 inverters) which can be chained together to
scan in the configuration bits.  you would need two of these per
configuration bit and a two-phase non-overlapping clock to shift the
data.  if you can guarantee a minimum clock speed, you can probably
replace one of these cells with just a pass gate and dynamic latch (i.e.
inverter).  thus, 8 transistors per configuration bit.
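Behaviorally, the scan chain just described is a serial shift register; a minimal Python sketch (ignoring the two-phase clocking detail, and modeling no real device):

```python
def shift_in(chain_length, bitstream):
    """Shift a configuration bitstream serially into a chain of cells.

    Each iteration models one full two-phase clock cycle: a new bit
    enters at stage 0 while every stored bit moves one stage along."""
    chain = [0] * chain_length
    for bit in bitstream:
        chain = [bit] + chain[:-1]
    return chain

bits = [1, 0, 1, 1, 0, 0, 1, 0]
loaded = shift_in(8, bits)   # bits end up in reverse order of arrival
```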

i have designed something similar in a 1.2 micron double-metal single-
poly process, and the size was about 45 x 32 microns for two 5-transistor
cells (i.e., a single bit).  being a novice rectangle hack, i must admit
that this far from optimal.  i realized later that i could easily have saved
10-20% in area by rearranging things.  also, note that the scan-path is
not time critical so removing those extra 2 transistors, resizing the others,
and possibly using poly instead of metal would probably save 50%.
finally, another layer of metal or poly (as is common in current fpga
processes) would help quite a bit.

guy


Article: 813
Subject: How to daisy-chain FPGAs in software?
Date: 5 Mar 1995 19:52:22 -0500
Links: << >>  << T >>  << A >>
I have a couple of questions on programming with Altera FPGA devices.
I have written some  code using PLDshell Plus / PLDasm and the design does
not fit  in one device (We have a NFX780_84).

(1) I know that I can write different modules in different files and then
MERGE the design. I guess this does not help in fitting the design in more
than one device. In any case, if I use the MERGE option, does PLDshell
consider one device per file OR does it merge all the files into ONE
device?

(2) If MERGE does not consider one device per file, how do I compile each
module or group of modules into one device and the rest in one or more
devices and then simulate the whole design? In other words, how can I
program one design (with many modules) in two or more devices?

I would appreciate any pointers to the above queries. Thanks for your time
and help.

Sincerely,
Shakuntala

--------------------------------------------------------------------------
--------------
OR
sanjanai@buster.eng.ua.edu
--------------------------------------------------------------------------
--------------


Article: 814
Subject: Re: Power gain when moving from FPGA to Gate Array
From: satwant@regulus (Satwant Singh)
Date: Mon, 6 Mar 1995 14:53:11 GMT
Links: << >>  << T >>  << A >>
Robert Tjarnstrom (ekatjr@eua.ericsson.se) wrote:
: What is the experiences of reduction in power
: dissipation/consumption when moving an FPGA design to a Gate Array.

The main source of power reduction when moving
a design from an FPGA to an MPGA (Mask Programmable
Gate Array) is the reduction in capacitance (mainly
from the programmable routing) that has to switch
with each clock or data transition. The other
lesser source could be the reduction in short-circuit
current due to the smaller and fewer internal buffers
in an MPGA.

So, an estimate of the power reduction, in moving
a design from an FPGA to an MPGA, can be computed
from the reduction in delays of the various logic
paths. The reduction in short-circuit current may
also be computed in the same way.

This could be a good research topic to extrapolate
the predicted degradation of circuit speeds in FPGAs
vs. MPGAs, and come up with a predicted increase
in power. For example, some estimates suggest that
the FPGAs could be up to 3 to 10 times slower than MPGAs.
How much more power will the same circuit consume
in an FPGA if the clock rate is kept the same?
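As a back-of-the-envelope sketch of that question, one can take the classic dynamic-power relation P = C*V^2*f and use the quoted delay ratio as a stand-in for the switched-capacitance ratio; every number below is an invented assumption, not a measurement:

```python
# Sketch of the estimate above: if FPGA routing adds k times the switched
# capacitance (taking the 3x-10x delay ratio as a proxy for k), dynamic
# power at the same clock rate grows by roughly the same factor k.
def dynamic_power(c_eff, vdd, f):
    """Classic CMOS dynamic power: P = C * V^2 * f."""
    return c_eff * vdd ** 2 * f

f_clk = 25e6       # same clock for FPGA and MPGA (assumed)
vdd = 5.0          # 5 V supply, typical for 1995-era parts
c_mpga = 100e-12   # assumed effective switched capacitance in the MPGA

for slowdown in (3, 10):           # the "3 to 10 times slower" range
    c_fpga = c_mpga * slowdown     # proxy: capacitance scales with delay
    ratio = dynamic_power(c_fpga, vdd, f_clk) / dynamic_power(c_mpga, vdd, f_clk)
    print(f"{slowdown}x slower routing -> ~{ratio:.0f}x the dynamic power")
```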

Enough rambling!

Satwant.


Article: 815
Subject: Re: Limits on on-chip FPGA virtual computing
Date: 6 Mar 1995 08:56:17 -0700
Links: << >>  << T >>  << A >>

In article <3jdb0i$6cn@alpha.cisi.unige.it>,
adg@PROBLEM_WITH_INEWS_DOMAIN_FILE (alessandro de gloria) writes:
|> the compiler. An interesting paper is from Smith published in MICRO27, if you
|> are interested I can send the www address.

Please post the www address here.

--
Brad L. Hutchings (801) 378-2667          Assistant Professor
Brigham Young University - Electrical Eng. Dept. - 459 CB - Provo, UT 84602
Reconfigurable Logic Laboratory


Article: 816
Subject: Re: Comp.Arch.FPGA Reflector V1 #152
From: eacosta@media.mit.edu
Date: Mon, 6 Mar 1995 16:11:21 GMT
Links: << >>  << T >>  << A >>
>> There might be a significant overhead when compared to a fixed VLSI solution
>> both real-estate- and performance-wise, but when you compare trying to
>> implement the same features with a software-only solution the odds are
>> reversed.

Hi there,

I'm joining the fray a little late on this thread, but I just can't believe
the direction it's taking.  Forgetting about software comparisons and
focusing on "fixed VLSI solutions" for a moment: why would you ever compare
a "fixed VLSI solution" to an equivalent FPGA implementation?  As my
colleague Andre DeHon at the AI Lab points out in defense of his work, no
reasonable person would ever do such a thing, and thus statements like "you
lose a factor of ~100 in density" are pointless.  You just can't make any
reasonable comparison between apples and oranges.

Look, modern high-performance processors spend the good majority of their
transistors doing things like optimizing cache performance, adding more
ports to a register file, etc.  ASICs also spend many transistors broadening
their application base to ensure that their development costs can be
amortized over a large enough user base to be cost effective.  But FPGAs, by
virtue of the fact that they are dynamically reconfigurable, do not have to
waste logic resources on any of that extra "stuff" that makes GPPs and ASICs
go.

They allow you to implement architectures that are absolutely specific to
your task.  If your task changes, no problem: just drop a new configuration
onto your working set.  That is the essence of custom computing.  Dynamic
reconfiguration removes the need to lay down all that extra "stuff" in
silicon.  Thus designs targeted for FPGAs that take advantage of this fact
would use far fewer gates than their "fixed VLSI design" counterparts.

The relevant, and really interesting, question is: for any fixed VLSI design
(here I mean ASICs), how few gates can you get away with at any one time in
an equivalent FPGA design (taking advantage of dynamic reconfiguration) that
accomplishes the same task?  Has anyone done any work in this area?  I'd
certainly be interested to hear of it.  I'll personally offer several of my
favorite lollipops to the first person that writes up a study on this topic
for a set of image-processing-type applications.  You'd better hurry though;
I'll be attacking this next month, and I like lollipops.

Also, just a really stray thought: does the old economics of silicon area
really apply in an age of more silicon than we know what to do with and
custom computing on the rise?  How much is enough silicon area?  Is there a
point beyond which we don't care?  Has anyone else had this strange guttural
feeling?  I'd be interested in hearing!

Thanks,
Ed Acosta

----------------------------------------------------------------------------
Edward Acosta                                Edward Acosta
Research Associate                           4 Longfellow Place #1504
MIT Media Laboratory, E15-319                Boston, MA 02114
20 Ames St., Cambridge, MA 02139             (617)227-9338
phone: (617)253-2241   fax: (617)258-6264
----------------------------------------------------------------------------
Why go 90% of the way for 25% of the reward when you can go 100% of the
way for all of it!
----------------------------------------------------------------------------


Article: 817
Subject: Re: Lattice ispLSI starter kit
From: Scott Bierly <bierly@era.com>
Date: Mon, 6 Mar 1995 23:05:11 GMT
Links: << >>  << T >>  << A >>
In article <3j6j04$v6@idefix.eunet.fi> Sami Sallinen, sjs@varian.fi
writes:
>iisakkil@alpha.hut.fi (Mika Iisakkila) wrote:
>>
>> Check out ftp.intel.com. PLDShell should be still available despite
>> that the business was bought out by Altera. I haven't tried yet if the
>> fuse maps generated for iPLD22V10 work with the ispGAL22V10, but I
>> can't see why not. The software even includes a simulator - I'd use
>> the Intel/Altera parts, if only I had a programmer...
>> --

I have successfully programmed GAL22V10 (unfortunately, not the ISP
variety) using the free PLDShell software.  I would expect that it should
work for ISP parts too.


Article: 818
Subject: Partitioning and synthesis
From: singla@liz.ece.scarolina.edu (Ashutosh Singla)
Date: 6 Mar 1995 23:06:15 GMT
Links: << >>  << T >>  << A >>
Hi all,

Somebody had earlier posted an article with the same subject with a lot of questions.
I want  to add one more question.

What is the basis of partitioning the circuit among multiple devices?

Thanks

Ashutosh


Article: 819
Subject: DSP in FPGA ?
Date: 6 Mar 1995 22:53:56 -0800
Links: << >>  << T >>  << A >>

Hi everybody,

I want to implement a special-purpose DSP algorithm in an FPGA,
hoping that it will be faster and cheaper than the
alternatives, which are:

A general purpose DSP microprocessor
An ALU chip using an external state machine

The ASIC would basically be a 32-bit multiplier, adder, half
a dozen or so registers, 3 busses, and an FSM implementing
the algorithm.  I want to put many of these on a single chip
and exploit parallelism.

I want to evaluate this idea without
having to spend X thousand dollars on development software
and a device programmer.

Have there been any recent articles or papers on this particular topic ?
Has anyone been down this road and can let me know whether
"chip densities and cost still have a ways to go yet"
or
"FPGA really are cheap and fast, go for it"

Are there any vendor application engineers reading this group ?

FPGA consultants ? - Are there customizable
circuits like OAK/PINE but for FPGA ?

I suspect that the prices of the really high density devices
have a ways to go before FPGA u-processors can compete.

-rob


Article: 820
Subject: Re: FPGA Custom Computing Machine
From: kugel@mp-sun6.informatik.uni-mannheim.de (Andreas Kugel)
Date: 7 Mar 1995 07:57:03 GMT
Links: << >>  << T >>  << A >>
>
>
>FCCM is April 11-14 in NAPA.
Wrong: April 19 - 21

>http://www.super.org:8000/FPGA/comp.arch.fpga.

or http://www.super.org:8000/FPGA/fccm95.html

---

--------------------------------------------------------
Andreas Kugel
Chair for Computer Science V      Phone:(49)621-292-5755
University of Mannheim            Fax:(49)621-292-5756
A5
D-68131 Mannheim
Germany
e-mail:kugel@mp-sun1.informatik.uni-mannheim.de
--------------------------------------------------------


Article: 821
Subject: Implementing Asynchronous Circuits
From: brown@cs.cornell.edu (Geoffrey Brown)
Date: Tue, 7 Mar 1995 13:14:05 GMT
Links: << >>  << T >>  << A >>
Before implementing asynchronous circuits on FPGAs it might
be helpful to pin down your goals.  If you are interested in
testing an algorithm implemented from asynchronous building blocks
(a la Philips labs) or trying out the micropipelines approach, an alternative
is to use a fully synchronous implementation.  John O'Leary and I have a paper
exploring this idea which I will send to anyone who is interested.

Geoffrey Brown

Synchronous Emulation of Asynchronous Circuits

We present a novel approach to prototyping asynchronous circuits
which uses clocked field programmable gate arrays (FPGAs).  Unlike
other proposed techniques for implementing asynchronous circuits on
FPGAs, our method does not attempt to preserve the pure asynchronous
nature of the circuit. Rather, it preserves the communication
behavior of the circuits and uses synchronous duals for common
asynchronous modules.


Article: 822
Subject: Re: Questions of implementing asynchronous circuits using FPGAs.
From: arvin@kihelektro.kih.no (Arvin Patel)
Date: Tue, 7 Mar 1995 09:04:53
Links: << >>  << T >>  << A >>
In article <199502280015.TAA02793@play.cs.columbia.edu>
cheng@news.cs.columbia.edu (Fu-Chiung Cheng) writes:

>Question 4. Any Email-address, WWW, or tel. no. related to the above products
>            are welcomed.
>

>        Thanks a lot in advance.

>                                                -John
>                                                Email: cheng@cs.columbia.edu

Xilinx                               http://www.xilinx.com/xilinx.htm

AMD   (make PLDs)    http://www.amd.com/

Arvin


Article: 823
Subject: hypertext PLDasm manual available online
From: devb@elvis.vnet.net (David Van den Bout)
Date: 7 Mar 1995 13:39:59 -0500
Links: << >>  << T >>  << A >>
XESS Corp. has just released the second chapter of
"FPGA Workout II".  This chapter covers the PLDasm
hardware description language that's used to program
PLDs and FPGAs.  It's a hypertext document
that will execute on a DOS machine with a VGA display.

If interested, you can retrieve this file via
anonymous FTP from ftp.vnet.net in directory
pub/xess/hyperdoc.  Get the ZIPPED and executable
file pldasm.exe and install.txt.
--

||  Dave Van den Bout  ||
||  Xess Corporation   ||


Article: 824
Subject: Cost of FPGA
From: saghir@ece.utexas.edu (Saghir A. Shaikh)
Date: Wed, 08 Mar 1995 00:22:15 +0500
Links: << >>  << T >>  << A >>
Dear Fellows:

I have to write an estimate of the cost of a possible fabrication of a
massively parallel system (probably an MCM) using FPGA technology.  Could
you give me some hints on how I can get the standard figures (cost per
gate, etc.) and a breakdown of the costs?

The same info is also required for ASIC design technology.

Any suggestions!

Saghir

--
Peace
Saghir A. Shaikh               Email:saghir@ece.utexas.edu