Messages from 5050

Article: 5050
Subject: Re: Safety Critical Apps -> Xilinx Checker.
From: "Steven K. Knapp" <optmagic@ix.netcom.com>
Date: 16 Jan 1997 19:57:19 GMT
I have seen such a system implemented.  As you mentioned in your posting, a
Xilinx FPGA device can perform a readback operation.  With a small amount
of extra logic, the FPGA device can perform a self-check.  It reads back
its bitstream and compares it against the values stored in the
configuration PROM.  A second PROM is required to hold the mask values, as
the readback bitstream contains the values of the internal logic block and
I/O block flip-flops (these would not be constant).

If the readback and the configuration values don't match, then the FPGA
causes a re-boot.  Of course, this doesn't take into account bit-upsets in
the configuration PROM.
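The self-check loop described above (read back the bitstream, mask off the
flip-flop state bits, compare against the golden PROM image) can be sketched
as follows. This is a hypothetical Python illustration; `bitstream_ok` and
its arguments are invented names, not any Xilinx API.

```python
def bitstream_ok(readback, golden, mask):
    """Compare a readback bitstream (list of bytes) against the golden
    configuration image, ignoring bits the mask marks as don't-care
    (i.e. the internal logic block and I/O block flip-flop state)."""
    if not (len(readback) == len(golden) == len(mask)):
        return False
    # A mask bit of 1 means "compare this bit"; 0 means "ignore it".
    return all((r ^ g) & m == 0 for r, g, m in zip(readback, golden, mask))
```

In the real system a mismatch would trigger the re-boot; here the check
simply returns False.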

The Xilinx software can create a PROM file with the MakeMask option in
Makebits.  The details are a bit sketchy (I saw this about five years ago)
but it has been done successfully.  E-mail me directly if you have more
questions.

As a side note, SRAM cells are used not only in FPGAs but also in most CPLD
devices to hold configuration bits that require decreased capacitive
loading.  These SRAM cells are not like those used in SRAM memory devices.
In SRAMs, an extremely high-impedance resistor to VCC and ground permits
faster write access at the expense of cell stability.  In most FPGAs/CPLDs,
there is a lower-impedance resistor to VCC and ground that provides increased
cell stability.  Write performance is not important in FPGAs/CPLDs.
The difference is roughly a million-to-one improvement in stability.


Steven Knapp
E-mail:  optmagic@ix.netcom.com
Programmable Logic Jump Station:  http://www.netcom.com/~optmagic

Hans Tiggeler <ees1ht@ee.surrey.ac.uk> wrote in article
<5bkroa$ru4@info-server.surrey.ac.uk>...
> Just out of interest, has anybody implemented a continuous checking
> watchdog on a Xilinx FPGA?
> 
> Xilinx provides a configuration read-back option; using the same device
> or some external CRC checker, the system can continuously be checked for
> configuration errors due to SEU. The error latency will of course depend
> on the configuration size and read-back speed (1 or 8 MHz?). To improve on
> this the user can for example add redundant logic and boundary scan
> techniques. Total dose tolerance can be improved by different packaging
> techniques such as RADPACK (tm), and SEL can be addressed with a fast
> electronic fuse.
> 
> The unknown of course in this system is: can you damage a Xilinx device
> by randomly changing configuration bits?

Article: 5051
Subject: Meta Assembler wanted
From: "Thomas J. Loftus" <tloftus@hns.com>
Date: Thu, 16 Jan 1997 16:16:57 -0500
Hello everybody,

Does anyone know of a meta-assembler which might be cheaper/newer than
the one offered by Hilevel Technology Inc.?  I want to generate
programs for an ASIC embedded sequencer with a custom instruction set.

For those who aren't familiar with a meta-assembler, it's a software
product which can be used to define instruction sets for custom
sequencers.  Years ago, I used it for 2900 family sequencers,
now I would like to use something like it in an ASIC design for
an embedded sequencer, similar to a co-processor.

The one from Hilevel (HALE) is old, runs on DOS and they quote $2300.
I think I can do better than this and hope somebody can help me.

Please reply directly to tloftus@hns.com because I don't get to 
these newsgroups very often.

Tom



-- 
Thomas J. Loftus             |   Electrical Design Automation Group
phone: (301) 548-1916        |   Hughes Network Systems
email: tloftus@hns.com       |   11717 Exploration Lane
FAX:   (301) 212-2099        |   Germantown, MD  20876

All statements reflect my personal views and not necessarily that of
HNS.
Article: 5052
Subject: Re: ASICs Vs. FPGA in Safety Critical Apps.
From: ees1ht@ee.surrey.ac.uk (Hans Tiggeler)
Date: 17 Jan 1997 12:46:41 GMT
In article <01bc03d5$5f2f8b40$6e0db780@Rich>, rich.katz@gsfc.nasa.gov says...
>
>designs of this type have been done before and we called it a poor man's
>edac 
Why? Voting systems are used in a lot of other engineering disciplines.
Sometimes the ability to handle multiple bit errors is more desirable than
the two extra memory chips (or the extra logic). Ever tried to correct two
errors using a modified Hamming code?
>(and it was done when they had 1 kbit memories <- not a typo).  didn't
>even need an fpga, the 54ls253 makes a great voter w/ only one half and by
>adding an inverter the other half can function as an error indicator.  and
>the state machine wasn't tough and the read cycle had to be transformed
>into a read-modify-write; this was important for consistent software
>timing/verification in the presence of errors.
My point exactly; the beauty of this system is its simplicity. Perhaps we
wouldn't have so many failures if we used more "poor man's" techniques. The
FPGA was chosen to include glue and error-counting logic.
>
>rk
>
>p.s. and i did a 16-bit flow through edac in a 1020 which had the ability
>(when using x8 ram chips) to swap out failed chips in the real memory and
You sound like a very good engineer,

Hans.

Article: 5053
Subject: Re: ASICs Vs. FPGA in Safety Critical Apps.
From: "Rich K." <rich.katz@gsfc.nasa.gov>
Date: 17 Jan 1997 15:11:29 GMT


Hans Tiggeler <ees1ht@ee.surrey.ac.uk> wrote in article
<5bnsbh$7et@info-server.surrey.ac.uk>...
> In article <01bc03d5$5f2f8b40$6e0db780@Rich>, rich.katz@gsfc.nasa.gov
says...
> >
> >designs of this type have been done before and we called it a poor man's
> >edac 
> Why? voting systems are used in a lot of other engineering disciplines 

in these 'prehistoric' days you couldn't go to the store and get an edac
chip and fpga's weren't invented.  it used dumb, simple brute force
techniques.  to get speed, you had to do an asic ($, time) and a discrete
edac implementation was too slow.  tmr is very fast, a single gate delay
(54ls253).  and tmr systems are used all the time, and very handy for
making soft fpga memory elements effectively hard.  these is reasonable for
fpga's, where there are plenty of gates.  implementing this at the discrete
level, i.e., for seu-protection of 54lsxx circuits, was kind of bulky and
using a newer technology (rad-hard sandia cmos) was the way we went.
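The 2-of-3 majority vote discussed above can be modelled in a few lines of
Python. This is an illustrative model, not the actual 54ls253 hardware:
`tmr_vote` is the voter, and `tmr_error` plays the role of the inverter
trick mentioned earlier, flagging any bit where the three copies disagree.

```python
def tmr_vote(a, b, c):
    """Bitwise 2-of-3 majority: each output bit matches at least two of
    the three inputs, so any single upset in one copy is outvoted."""
    return (a & b) | (a & c) | (b & c)

def tmr_error(a, b, c):
    """Nonzero wherever the three copies disagree (the error indicator)."""
    return (a ^ b) | (a ^ c)
```

A single-bit upset in one copy leaves the voted result unchanged while the
error indicator pinpoints the flipped bit.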

rk

Article: 5054
Subject: Able to reverse a .JED back to logic?
From: Encore Electronics <encore@shell.global2000.net>
Date: 17 Jan 1997 15:34:08 GMT
Greetings... our customer generates JEDEC files for boards we build for
him, and we do the programming of Lattice in-system-programmable parts on
the board. I've got an older version of the code in .JED format, and a
newer version he just created from scratch after losing the original
source logic code (.LIF or .LDF files). The newer version naturally
doesn't work, and we don't know exactly what's in the older one. 

Is there a way to reverse-compile the working .JED file to something
humanly understandable that we can tweak, so he can re-compile it again?
I'm not too keen on the idea of taking the fuse map, writing it out on a
sheet of paper to see which fuses are blown and which are open, and then
figuring out the logic from that.

The part in question is the ispLSI1032... fortunately we haven't had this
problem (yet) with the much larger ispLSI3256 he's also using on the
board.

Help?

Tom Moeller
Encore Electronics

Article: 5055
Subject: Xilinx swap space
From: joe@iscm.ulst.ac.uk (Joe Blake)
Date: 17 Jan 1997 16:44:52 GMT
How do you set the swap space in Xilinx ver 5.1.0 on DOS 6.22 ?
Many thanks in advance.
Joe
Article: 5056
Subject: Advice request
From: Ben Chaffin <98bcc@williams.edu>
Date: Fri, 17 Jan 1997 15:49:04 -0800
Hello,
	I am a computer science major at Williams College. I am working 
on a computational number theory problem for one of my professors. I have 
implemented it in C and optimized it to death and am now considering 
building a special purpose computer to speed it up. So, along with 
everyone else, I'm trying to find the best balance between a limited 
budget and speed. I have a design for discrete chips that should run 
about 5 times as fast as a P-120. The cost is $300-$400, which isn't bad, 
but the circuit involves nearly 200 chips, with about 3000 pins. That's a 
lot of wiring. The algorithm involves:

	Two parallel 128-bit additions of internal-only data
	Storage for the result (256 bits)
	Two 48-bit conditional inversions using XOR gates
	Two priority encodings across 48 bits
	Some counters, flip-flops and simple control circuitry
	Around 50 bits of output

	It's not a particularly complicated circuit, just large. I've 
been looking at Motorola's MPA1036 chips -- I think that two of these 
would do the job, and leave plenty of room for neat parallel addition 
algorithms and such. I think I would be able to borrow an EPROM 
programmer to store the configuration. But I'm pretty new to digital 
hardware in general, and especially to FPGA's. So here are my questions:

	Are there faster/cheaper/better chips than the MPA series for 
this type of computation? I have nothing available to me except computers 
-- no commercial design software. Reprogrammability is a plus, since I'm 
new and the chances I'll get everything right the first time are slim.

	Is there a way to import a schematic from Chipmunk's diglog into 
a layout program such as Motorola's?

	Does anybody have an estimate of how long a 128-bit ripple-carry 
addition would take? (i.e., is it worth it to use FPGA's?)

	Any answers, advice, or general comments would be useful. Thanks 
very much.

	Ben Chaffin
	98bcc@williams.edu
Article: 5057
Subject: advice request
From: Ben Chaffin <98bcc@williams.edu>
Date: Fri, 17 Jan 1997 15:56:04 -0800
Hello,
        I am a computer science major at Williams College. I am working
on a computational number theory problem for one of my professors. I have
implemented it in C and optimized it to death and am now considering
building a special purpose computer to speed it up. So, along with
everyone else, I'm trying to find the best balance between a limited
budget and speed. I have a design for discrete chips that should run
about 5 times as fast as a P-120. The cost is $300-$400, which isn't bad,
but the circuit involves nearly 200 chips, with about 3000 pins. That's a
lot of wiring. The algorithm involves:

        Two parallel 128-bit additions of internal-only data
        Storage for the result (256 bits)
        Two 48-bit conditional inversions using XOR gates
        Two priority encodings across 48 bits
        Some counters, flip-flops and simple control circuitry
        Around 50 bits of output

        It's not a particularly complicated circuit, just large. I've
been looking at Motorola's MPA1036 chips -- I think that two of these
would do the job, and leave plenty of room for neat parallel addition
algorithms and such. I think I would be able to borrow an EPROM
programmer to store the configuration. But I'm pretty new to digital
hardware in general, and especially to FPGA's. So here are my questions:

        Are there faster/cheaper/better chips than the MPA series for
this type of computation? I have nothing available to me except computers
-- no commercial design software. Reprogrammability is a plus, since I'm
new and the chances I'll get everything right the first time are slim.

        Is there a way to import a schematic from Chipmunk's diglog into
a layout program such as Motorola's?

        Does anybody have an estimate of how long a 128-bit ripple-carry
addition would take? (i.e., is it worth it to use FPGA's?)

        Any answers, advice, or general comments would be useful. Thanks
very much.

        Ben Chaffin
        98bcc@williams.edu
Article: 5058
Subject: Re: advice request
From: Ray Andraka <randraka@ids.net>
Date: Fri, 17 Jan 1997 20:42:32 -0800
Ben Chaffin wrote:
> 
> Hello,
>         I am a computer science major at Williams College. I am working
> on a computational number theory problem for one of my professors. I have
> implemented it in C and optimized it to death and am now considering
> building a special purpose computer to speed it up. So, along with
> everyone else, I'm trying to find the best balance between a limited
> budget and speed. I have a design for discrete chips that should run
> about 5 times as fast as a P-120. The cost is $300-$400, which isn't bad,
> but the circuit involves nearly 200 chips, with about 3000 pins. That's a
> lot of wiring. The algorithm involves:
> 
>         Two parallel 128-bit additions of internal-only data
>         Storage for the result (256 bits)
>         Two 48-bit conditional inversions using XOR gates
>         Two priority encodings across 48 bits
>         Some counters, flip-flops and simple control circuitry
>         Around 50 bits of output
> 
>         It's not a particularly complicated circuit, just large. I've
> been looking at Motorola's MPA1036 chips -- I think that two of these
> would do the job, and leave plenty of room for neat parallel addition
> algorithms and such. I think I would be able to borrow an EPROM
> programmer to store the configuration. But I'm pretty new to digital
> hardware in general, and especially to FPGA's. So here are my questions:
> 
>         Are there faster/cheaper/better chips than the MPA series for
> this type of computation? I have nothing available to me except computers
> -- no commercial design software. Reprogrammability is a plus, since I'm
> new and the chances I'll get everything right the first time are slim.
> 
>         Is there a way to import a schematic from Chipmunk's diglog into
> a layout program such as Motorola's?
> 
>         Does anybody have an estimate of how long a 128-bit ripple-carry
> addition would take? (i.e., is it worth it to use FPGA's?)
> 
>         Any answers, advice, or general comments would be useful. Thanks
> very much.
> 
>         Ben Chaffin
>         98bcc@williams.edu

Many of the other vendors sell their software at prices starting around
$1,000. If you are not intending to do many designs, it is probably not
cost effective for you to go and buy the software.  However, since you
are essentially doing the work for an educational institution, you may
qualify for one of the special educational programs many of the vendors
have.

As you may already be aware, a ripple adder is the slowest
implementation. There are a number of carry speed-up schemes that can
help (at the cost of more cells used).  For smaller adders, the
performance gain is generally eclipsed by the longer delay in each
adder bit on FPGAs.  For big adders, however, there is plenty to
gain by doing some form of radix addition with carry between the
digits.  

Since you are apparently doing this as a numerical processor that
doesn't really have any timing constraints to the outside world, I would
heavily pipeline the design to pick up throughput (that includes
internally pipelining the 128-bit adders). If the process is one that
can be performed by splitting into identical parallel paths, you can do
it using a bunch of identical parallel bit-serial processors (see the
papers on my website).  For a 128-bit straight ripple adder you're looking
at somewhere around 150-200ns (a rough estimate; it really depends on speed
grade, architecture, etc.). If you do it with bit-serial hardware, the
bit rate can often be better than 100MHz.  So a 128-bit add will
take somewhere around 1-2us with less than 1/100th of the logic
otherwise required.
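The bit-serial scheme described above can be modelled in Python: one full
adder reused once per "clock", LSB first, with a single carry bit (the
carry flip-flop) held between steps. The function and parameter names here
are illustrative, not from any vendor library.

```python
def bit_serial_add(a, b, width=128):
    """Add two width-bit integers one bit per step, LSB first,
    the way a bit-serial adder cell would in hardware."""
    carry, result = 0, 0
    for i in range(width):
        abit = (a >> i) & 1
        bbit = (b >> i) & 1
        s = abit ^ bbit ^ carry                       # sum bit
        carry = (abit & bbit) | (carry & (abit ^ bbit))  # carry out
        result |= s << i
    return result  # final carry-out discarded, as in a fixed-width register
```

The point of the model is the cost trade: one adder cell and `width` clock
ticks instead of `width` adder cells and one (long) combinational delay.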

Generally speaking, you can expect performance improvements of anywhere
from 10x to 1000x over software by using an FPGA.
 
As far as better/faster etc., I find look-up table type devices a little
easier for the uninitiated. They generally do not have as many
functionality and routing restrictions at the cell level.  The fine
grain architectures can often be pushed a little faster, but you have to
tailor the design to the architecture to gain that advantage (not likely
to happen on your first FPGA design). Of course, this is not much of an
issue if the vendor has decent libraries and you are not shooting for
high performance (relative to maximum clock speeds) or high density. For
the average user, it really comes down to personal taste (and usually
your employer's preference), and what you have experience with.

I am not familiar with chipmunk software. Odds are you won't be able to
translate between it and any of the FPGA fitters unless it outputs in a
format compatible with the more standard capture tools and it can read
the library for the target device. 

Perhaps a part like the Lattice Semiconductor ISP parts will do the
trick.  I think Lattice may distribute a free basic entry tool for some
of their CPLDs.  Those parts are EEPROM based, are programmed in circuit
via a 5 wire interface, and the programmer is any PC.  They are more
limited in the number of registers than an FPGA, but if you do it bit
serially, there should be no problem fitting.

Some of the vendors you may want to look at are: Actel, Altera, AMD,
Atmel, Crosspoint, Cypress, Lattice, Lucent, QuickLogic, National
Semiconductor, Xilinx, and Zycad (sorry if I missed anyone).  Actel,
Altera and Xilinx are the 600 lb gorillas in the industry.  Motorola is
a newcomer, so you'll find the experience level is less.

Hope this helps a little.

-Ray Andraka, P.E.
Chairman, the Andraka Consulting Group
401/884-7930   FAX 401/884-7950
mailto:randraka@ids.net
http://www.ids.net/~randraka/
 
The Andraka Consulting Group is a digital hardware design firm
specializing in high performance DSP designs in FPGAs. Expertise
includes reconfigurable computers, computer arithmetic, and maximum
performance/density FPGA design.
Article: 5059
Subject: Re: ANNOUNCE 8051/8052 microcontroller model now available for FPGA
From: jim granville <Jim.Granville@xtra.co.nz>
Date: Fri, 17 Jan 1997 22:17:35 -0800
David Baker wrote:
> 
>  Available now.  Richard Watts Associates RAW8052 core targeted
> to Altera FLEX10K FPGA.
> 
> Occupies approx 1/3 of the logic of a 10K100 part.
> 
> Netlist (VHDL, Verilog) or VHDL source available for seamless
> migration to ASIC.
> 
> To obtain a free evaluation version check our page at
> www.evolution.co.uk or contact richardw@evolution.co.uk.

How much is it, and how fast does it run ?
-- 
===============  Serious Design Tools for uC and PLD  ================
= Optimising Structured Text compilers for ALL 80X51 variants
= Reusable object modules, for i2c, SPI and PLDwire bus interfaces
= Safe, Readable & Fast code - Step up from Assembler and C
= Emulators / Programmers for ATMEL 89C1051, 2051, 89C51 89S8252 89C55
= HW : IceP2051, Emul517C, TRICE-31X, PC CANport
= for more info, Email : DesignTools@xtra.co.nz    Subject : c51Tools


Article: 5060
Subject: Re: Any PEEL22CV10A replacements with more capacity?
From: jim granville <Jim.Granville@xtra.co.nz>
Date: Fri, 17 Jan 1997 22:20:41 -0800
Thomas C. Jones wrote:
> 
> Does anyone know of a pin-compatible replacement to the PEEL22CV10A
> offering greater routing capacity and resources?
> 
> -Tom
> tjones@aspect.com
 Look at the ATMEL ATV750BL, ( Dip24) this has 20 FF's, 10 buried, and
Toggle options. 
All FF's have ASYNC clocks, and OE terms.
 This is the smartest 24-pin PAL we have found, and we have fitted i2c
slave designs into it.

-- 
===============  Serious Design Tools for PLD and uC  ================
= HDL PLD design libraries, Source & Sim - HDLapIO and HDL_Pro
= Chip Vendor independant designs & tools
= for more info, Email : DesignTools@xtra.co.nz   Subject : HDLapPacks

Article: 5061
Subject: Re: ASICs Vs. FPGA in Safety Critical Apps.
From: Jan Vorbrueggen <jan@mailhost.neuroinformatik.ruhr-uni-bochum.de>
Date: 18 Jan 1997 11:37:50 +0100
David Erstad <erstad@ssec.honeywell.com> writes:

> I think most people designing and flying space systems would be very 
> uncomfortable with the concept that a part would latch up and it would
> be "OK".

Clementine did, IIRC. (The rad-hard 1802 watched the current drawn by the
R3000 and, if it started to rise, reset the processor.) And the reason for the
final, ugh, "off-nominal" behaviour had nothing to do with this.

	Jan
Article: 5062
Subject: Re: advice request
From: Richard Schwarz <AAPS@EROLS.COM>
Date: Sat, 18 Jan 1997 10:55:55 -0500
Ben Chaffin wrote:
> 
> Hello,
>         I am a computer science major at Williams College. I am working
> on a computational number theory problem for one of my professors. I have
> implemented it in C and optimized it to death and am now considering
> building a special purpose computer to speed it up. So, along with
> everyone else, I'm trying to find the best balance between a limited
> budget and speed. I have a design for discrete chips that should run
> about 5 times as fast as a P-120. The cost is $300-$400, which isn't bad,
> but the circuit involves nearly 200 chips, with about 3000 pins. That's a
> lot of wiring. The algorithm involves:
> 
>         Two parallel 128-bit additions of internal-only data
>         Storage for the result (256 bits)
>         Two 48-bit conditional inversions using XOR gates
>         Two priority encodings across 48 bits
>         Some counters, flip-flops and simple control circuitry
>         Around 50 bits of output
> 
>         It's not a particularly complicated circuit, just large. I've
> been looking at Motorola's MPA1036 chips -- I think that two of these
> would do the job, and leave plenty of room for neat parallel addition
> algorithms and such. I think I would be able to borrow an EPROM
> programmer to store the configuration. But I'm pretty new to digital
> hardware in general, and especially to FPGA's. So here are my questions:
> 
>         Are there faster/cheaper/better chips than the MPA series for
> this type of computation? I have nothing available to me except computers
> -- no commercial design software. Reprogrammability is a plus, since I'm
> new and the chances I'll get everything right the first time are slim.
> 
>         Is there a way to import a schematic from Chipmunk's diglog into
> a layout program such as Motorola's?
> 
>         Does anybody have an estimate of how long a 128-bit ripple-carry
> addition would take? (i.e., is it worth it to use FPGA's?)
> 
>         Any answers, advice, or general comments would be useful. Thanks
> very much.
> 
>         Ben Chaffin
>         98bcc@williams.edu



Ben,

Check out the XILINX kits at http://www.erols.com/aaps; they are very 
reasonable and include a decent utility board. The board takes chips 
which can implement up to 16,000 gates! More powerful boards are also 
available. I designed the boards and use them for many of my FPGA 
projects. We are offering substantial discounts on XILINX FOUNDATION 
software. This package includes VHDL, SCHEMATIC CAPTURE, POST ROUTE 
simulation and the fitter/router software. The software is packaged 
extremely well, and the X84 board comes with C templates which make 
implementing the designs and real-time control as painless as possible. 
The board is set up to take any FPGA from 2K to 16K gates in an 80-pin 
PLCC package, with all pins going to 20-pin connectors arranged in the 
HP logic analyzer format. It really is a helpful tool, and I would 
recommend these kits highly to both experienced and inexperienced FPGA 
developers. The board is also available separately for $275.00 and comes 
with the C drivers and a socketed 5202 FPGA which can be replaced with 
chips of up to 16,000 gates. The C drivers include download code so that 
the design can be loaded from the PC, via EPROM, or via the XILINX 
download cable if the board is used standalone, outside the PC.


Richard Schwarz
Article: 5063
Subject: [Q] Xilinx FPGA Resources
From: David Charles Hirschfield <dch+@andrew.cmu.edu>
Date: Sat, 18 Jan 1997 19:01:36 -0500
I'm currently working on a project that requires continuous programming
and reprogramming of a Xilinx XC4000 FPGA board.

Due to some strange setup requirements, we are not going to be able to
directly use the xchecker application and cable to program the board.

Does anyone have any information regarding the technical details of
programming Xilinx boards?

Any help would be greatly appreciated,
-David Hirschfield

+-===========================================================================-+
|                                                --== e-mail ==--             |
|     _/_/_/      _/_/    _/   _/  _/_/_/      dch+@andrew.cmu.edu            |
|    _/    _/  _/    _/  _/  _/   _/                   or                     |
|   _/    _/  _/_/_/_/  _/ _/    _/_/       cddch@paleo.giss.nasa.gov         |
|  _/    _/  _/    _/  _/_/     _/                                            |
| _/_/_/    _/    _/  _/       _/_/_/             --== WWW ==--               |
|                                     http://www.contrib.andrew.cmu.edu/~dch/ |
+-===========================================================================-+
 
Article: 5064
Subject: xact V6.0.1 & smartdrive cache....
From: stuart.summerville@practel.com.au (Stuart Summerville)
Date: Sun, 19 Jan 1997 05:12:35 GMT
Anyone had any problems using xact v6.0.1 under win3.x when using
smartdrive? I've just found that loading smartdrive with the /x option
(no write-behind cache) fixed some, if not all, of the crashing problems
I experienced when compiling my source.

Stu.
Article: 5065
Subject: Re: Meta Assembler wanted
From: Leon Heller <leon@lfheller.demon.co.uk>
Date: Sun, 19 Jan 1997 09:30:47 +0000
In article <32DE9AC9.79CA@hns.com>, "Thomas J. Loftus" <tloftus@hns.com>
writes
>Hello everybody,
>
>Does anyone know of a meta-assembler which might be cheaper/newer than
>the one offered by Hilevel Technology Inc.?  I want to generate
>programs for an ASIC embedded sequencer with a custom instruction set.
>
>For those who aren't familiar with a meta-assembler, it's a software
>product which can be used to define instruction sets for custom
>sequencers.  Years ago, I used it for 2900 family sequencers,
>now I would like to use something like it in an ASIC design for
>an embedded sequencer, similar to a co-processor.
>
>The one from Hilevel (HALE) is old, runs on DOS and they quote $2300.
>I think I can do better than this and hope somebody can help me.
>
>Please reply directly to tloftus@hns.com because I don't get to 
>these newsgroups very often.


You will find a free table-driven cross-assembler on the Simtel archive.
Look for TASM301.zip. You can create your own table for any instruction
set. There is a similar commercial product called Cross-32, which is
quite inexpensive ($500, perhaps). Cross-32 is suitable for 8-, 16- and
32-bit processors; TASM might only be 8-bit.
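The table-driven idea behind tools like TASM can be sketched in Python: the
assembler core stays generic, and the instruction set lives entirely in a
data table the user supplies. The mnemonics and encodings below are invented
for a hypothetical sequencer, purely to show the shape of the approach.

```python
# Each table entry maps a mnemonic to a function that turns its operand
# list into output bytes.  Swapping the table retargets the assembler.
TABLE = {
    "NOP":  lambda ops: [0x00],
    "LOAD": lambda ops: [0x80 | ops[0]],                      # 7-bit operand
    "JMP":  lambda ops: [0x40 | (ops[0] >> 8), ops[0] & 0xFF],  # 12-bit addr
}

def assemble(lines):
    """Encode 'MNEMONIC [operand ...]' source lines into a flat byte list."""
    out = []
    for line in lines:
        parts = line.split()
        mnemonic, ops = parts[0], [int(p, 0) for p in parts[1:]]
        out.extend(TABLE[mnemonic](ops))
    return out
```

A real meta-assembler adds labels, expressions and listing output, but the
core is exactly this: instruction definitions as data, not code.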

Leon
-- 
Leon Heller, G1HSM
leon@lfheller.demon.co.uk
Tel: +44 (0) 118 947 1424 (home)
     +44 (0) 1344 385556 (work)
Article: 5066
Subject: Re: ASICs Vs. FPGA in Safety Critical Apps.
From: CoxJA@augustsl.demon.co.uk (Julian Cox)
Date: Mon, 20 Jan 1997 09:13:56 GMT
ees1ht@ee.surrey.ac.uk (Hans Tiggeler) wrote:

>In article <01bc0161$d44246a0$6e0db780@Rich>, rich.katz@gsfc.nasa.gov says...
>>
>>In the near term, there will be an increasing number of NASA missions
>>flying DRAMs, including some long-term missions like Cassini to Saturn.  
>>
>>The easiest codes to work with are the Hamming codes, with most
>>implementations correcting single-bit errors and detecting double-bit
>>errors.  These codes permit easy correction on the fly and are very simple
>>to implement (and if one has time it can be done if FPGAs).  
>>If not, then Reed-Solomon or some other code would be
>>required, which are more difficult to decode.  
>
>Given the price of S/DRAM memories why not use a majority voting system such as TMR (Triple 
>Modular Redundancy). The system can handle multiple bit upsets and is very easy to implement. I 
>have currently a system running in an Actel1020 (17% used!). Of course the drawback is that you 
>have to either triplicate your memory or add a statemachine to perform three memory read cycles 
>per CPU read cycle.
>
Or use 3 processors / FPGA's to give you fault tolerance there too.
;-)



Article: 5067
Subject: Re: advice request
From: CoxJA@augustsl.demon.co.uk (Julian Cox)
Date: Mon, 20 Jan 1997 11:31:25 GMT
Ben Chaffin <98bcc@williams.edu> wrote:

>Hello,
>        I am a computer science major at Williams College. I am working
>on a computational number theory problem for one of my professors. I have
>implemented it in C and optimized it to death and am now considering
>building a special purpose computer to speed it up. So, along with
>everyone else, I'm trying to find the best balance between a limited
>budget and speed. I have a design for discrete chips that should run
>about 5 times as fast as a P-120. The cost is $300-$400, which isn't bad,
>but the circuit involves nearly 200 chips, with about 3000 pins. That's a
>lot of wiring. The algorithm involves:
>
>        Two parallel 128-bit additions of internal-only data
>        Storage for the result (256 bits)
>        Two 48-bit conditional inversions using XOR gates
>        Two priority encodings across 48 bits
>        Some counters, flip-flops and simple control circuitry
>        Around 50 bits of output
>
>        It's not a particularly complicated circuit, just large. I've
>been looking at Motorola's MPA1036 chips -- I think that two of these
>would do the job, and leave plenty of room for neat parallel addition
>algorithms and such. I think I would be able to borrow an EPROM
>programmer to store the configuration. But I'm pretty new to digital
>hardware in general, and especially to FPGA's. So here are my questions:
>
>        Are there faster/cheaper/better chips than the MPA series for
>this type of computation? I have nothing available to me except computers
>-- no commercial design software. Reprogrammability is a plus, since I'm
>new and the chances I'll get everything right the first time are slim.
>
>        Is there a way to import a schematic from Chipmunk's diglog into
>a layout program such as Motorola's?
>
>        Does anybody have an estimate of how long a 128-bit ripple-carry
>addition would take? (i.e., is it worth it to use FPGA's?)
>
>        Any answers, advice, or general comments would be useful. Thanks
>very much.
>
>        Ben Chaffin
>        98bcc@williams.edu
>
First of all, I'm gonna speak here about 3 vendors only.  That's not
because I think they are the fastest, cheapest or better than the
rest, it's just that these are the ones I use.

One vendor with which I have no experience is Motorola, so I cannot
comment there, but I do use Actel, Altera and Cypress daily.

The biggest factor in the cost of a discrete vs FPGA solution is
probably gonna be the tools.  The cheapest way into the wonderful
world of programmable silicon that I know of is the Cypress Warp
VHDL kit.  Get in touch with a Cypress rep and you should be able to get
on a one-day VHDL training course with a kit to take home for $99.  There
is no limit to the size of parts you can use with this budget kit: if
Cypress make 'em, you can use 'em.  There is, however, no schematic
entry of any kind, so you'll have to rewrite your entire design in VHDL.

If you wanna get the job done fast, then (IMHO) you should use Altera.
Their MAX+PLUS II system is all you need.  The Flex 10K series is
probably the right place to start for your design; the compiler will
tell you which part you need.  The other big advantage of Altera is
the price of the silicon: their price cutting is quite astonishing.  Take
a look at their web site / press releases to get an idea.  If you do go
for Altera, don't use their configurator PROMs; use Atmel's E^2 parts.

Actel would probably give you the greatest speed of operation of these
three vendors.  Their ACT3 parts are damn fast, but I guess you would
be better off with the 3200DX family 'cos those have SRAM bits built in.
The tools are expensive, but for entry you can use OrCAD, who definitely
do give academic discounts and give you your best chance of Chipmunk
import.

In summary, the advantages of these three vendors are:

Cypress:
	Cheap tools
	Huge range of devices (16V8 PAL to 20K gate FPGA from same tools)
	Reprogrammable on some ranges.

Altera:
	Best suite of integrated tools.
	Cheap silicon
	Reprogrammable.

Actel:
	Fastest operation
	Cheapest schematic entry via OrCad.


A few more points spring to mind before I sign off.

There is some research being done on compiling C directly to FPGAs; I
have a link to Oxford Uni in my bookmarks, but I haven't checked it out
for months.  Try: http://www.comlab.ox.ac.uk/oucl/hwcomp .

OrCAD are about to release a new product called Express and were looking
for beta testers a few weeks back.  Look for the thread 'Consulting
Opportunity' in this group or comp.lang.vhdl (try www.dejanews.com ).
Express gives you schematic and VHDL entry, but you still need a vendor's
compiler.

The best starting point I have found on the web for info on FPGAs is:
	Programmable Logic Jump Station (FPGA, CPLD)
	http://pw1.netcom.com/~optmagic/index.html

Hope that lot helps

TTFN

Julian


Article: 5068
Subject: Re: Meta Assembler wanted
From: RODNEYM@rodneym.ibm.net (Rodney Myrvaagnes)
Date: 20 Jan 1997 14:27:25 GMT
Links: << >>  << T >>  << A >>
On Sun, 19 Jan 1997 09:30:47 +0000, Leon Heller <leon@lfheller.demon.co.uk> wrote:
>>
>>For those who aren't familiar with a meta-assembler, it's a software
>>product which can be used to define instruction sets for custom
>>sequencers.  Years ago, I used it for 2900 family sequencers,

He needs something that puts out arbitrary word-length microcode. 
A cross assembler won't do unless his microcode word just happens
to be 32 bits, for instance.
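
To make the distinction concrete, here is a toy table-driven assembler in
Python that packs named fields into a microword of arbitrary width.  The
40-bit field layout is invented purely for illustration; a real
meta-assembler reads such tables from a user-supplied definition file:

```python
# Toy meta-assembler: pack named fields into an arbitrary-width microword.
# FIELDS gives (bit offset, width) for each field of a hypothetical
# 40-bit microword -- this layout is made up for the example.
FIELDS = {
    "next_addr":   (0, 12),
    "branch_cond": (12, 4),
    "alu_op":      (16, 8),
    "reg_sel":     (24, 8),
    "misc":        (32, 8),
}
WORD_BITS = 40

def assemble(**fields):
    """Build one microword from keyword field values."""
    word = 0
    for name, value in fields.items():
        offset, width = FIELDS[name]
        if value >= (1 << width):
            raise ValueError(f"{name} does not fit in {width} bits")
        word |= value << offset
    return word

def emit(word):
    """Render the microword as bytes, LSB first, for a PROM image."""
    return word.to_bytes((WORD_BITS + 7) // 8, "little")
```

The point is exactly the one made above: because the word width and field
table are data, not assumptions baked into the tool, the same code serves
a 32-bit, 40-bit, or 96-bit microword.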

>You will find a free table-driven cross-assembler on the Simtel archive.
>Look for TASM301.zip. You can create your own table for any instruction
>set. There is a similar commercial product called Cross-32, which is
>quite inexpensive ($500, perhaps). Cross-32 is suitable for 8, 16 and
>32-bit processors, TASM might only be 8-bit.

-- 
Rodney Myrvaagnes    Associate Editor, Electronic Products
rodneym@ibm.net        516-227-1434        Fax 516-227-1444
When possible, sailing J36 Gjo/a

Article: 5069
Subject: Re: Able to reverse a .JED back to logic?
From: ecla@world.std.com (alain arnaud)
Date: Mon, 20 Jan 1997 14:37:40 GMT
Links: << >>  << T >>  << A >>
Viewlogic has a tool called JED2VHDL which generates a structural
VHDL model from a JEDEC file.  I do not know if it's still available
in ViewOffice, but it used to be available with ViewPLD to allow for
simulation of PAL and PLD devices.


Encore Electronics (encore@shell.global2000.net) wrote:
: Greetings... our customer generates JEDEC files for boards we build for
: him, and we do the programming of Lattice in-system-programmable parts on
: the board. I've got an older version of the code in .JED format, and a
: newer version he just created from scratch after losing the original
: source logic code (.LIF or .LDF files). The newer version naturally
: doesn't work, and we don't know exactly what's in the older one. 

: Is there a way to reverse-compile the working .JED file to something
: humanly-understandable that we can tweak, so he can re-compile it again? 
: I'm not too keen on the idea of taking the fuse map and writing out on a
: sheet of paper to see what fuses are blown and open, and then figuring out
: the logic from that.

: The part in question is the ispLSI1032... fortunately we haven't had this
: problem (yet) with the much larger ispLSI3256 he's also using on the
: board.

: Help?

: Tom Moeller
: Encore Electronics

Article: 5070
Subject: AMD names programmable logic company
From: Scott Thomas <scott.thomas@vantis.com>
Date: Mon, 20 Jan 1997 06:42:05 -0800
Links: << >>  << T >>  << A >>
SUNNYVALE, CA-JANUARY 20, 1997-AMD today announced that Vantis
Corporation will be the name of its programmable logic company. 

Vantis is being formed to better serve the specialized requirements of
programmable logic customers. The company designs, develops and
markets programmable logic products, taking advantage of AMD's advanced
process technology and wafer manufacturing and packaging
capabilities. 

The naming of Vantis Corporation is the latest milestone in a process
that began last year when AMD announced the intent to form a
programmable logic subsidiary.  Since then, Vantis has established a
worldwide dedicated sales force and a dedicated team of field
applications engineers.

"Vantis is a 28 year-old startup backed by a multi-billion dollar parent
company with world-class manufacturing capabilities, leading-edge
technology and a global scope," said Rich Forte, president of Vantis
Corporation. "We are already one of the industry's leading suppliers of
programmable logic. By forming Vantis we will be able to focus more on
offering superior, cost-effective solutions that allow our customers to
get to market before their competition. Vantis is committed to making its
customers winners." 

Vantis also unveiled its new logo and worldwide web site at
www.vantis.com. The logo can be viewed and downloaded from the web site.
Vantis' new toll-free number is 

                1-888-826-8472 (VANTIS2). 

Vantis will continue to utilize AMD's worldwide distributors.  In
addition, Vantis will receive administrative support functions from AMD.
All customer support processes, such as order entry and technical
support, will remain the same, so that customers are ensured the same
high level of service that they receive today.

About AMD 

AMD is a global supplier of integrated circuits for the personal and
networked computer and communications markets. AMD produces
processors, flash memories, programmable logic devices, and products for
communications and networking applications. Founded in 1969
and based in Sunnyvale, CA, AMD had revenues of $2 billion in 1996.
(NYSE: AMD). 

Vantis and the Vantis logo are trademarks and AMD, the AMD logo and MACH
are registered trademarks of Advanced Micro Devices, Inc.
GENERAL NOTICE: Product names used in this publication are for
identification purposes only and may be trademarks of their respective
companies.
Article: 5071
Subject: Re: advice request: CPUs vs. FPGAs again
From: "Jan Gray" <jsgray@acm.org>
Date: 20 Jan 1997 17:58:04 GMT
Links: << >>  << T >>  << A >>
[This article is a repost of <01bc0692$176777e0$4e7dbacd@p5-166>,
cleaned up and with corrections regarding use of XC4000 H-muxes.]

Ben Chaffin <98bcc@williams.edu> wrote in article
<32E01194.778E@williams.edu>...
> Hello,
>         I am a computer science major at Williams College. I am working
> on a computational number theory problem for one of my professors. I have
> implemented it in C and optimized it to death and am now considering
> building a special purpose computer to speed it up. So, along with
> everyone else, I'm trying to find the best balance between a limited
> budget and speed. I have a design for discrete chips that should run
> about 5 times as fast as a P-120. The cost is $300-$400, which isn't bad,
> but the circuit involves nearly 200 chips, with about 3000 pins. That's a
> lot of wiring. The algorithm involves:
> 
>         Two parallel 128-bit additions of internal-only data
>         Storage for the result (256 bits)
>         Two 48-bit conditional inversions using XOR gates
>         Two priority encodings across 48 bits
>         Some counters, flip-flops and simple control circuitry
>         Around 50 bits of output

Hooray for such questions, which help to give us insight
into the categorization of problems that are better done
on FPGAs than on general purpose computers.  In this
case, we compare a fast implementation on a general
purpose computer with a fast implementation on an
FPGA (an XC4010E-2) which seems well suited for the
problem.  I hope the discussion is of some indirect
use to Mr. Chaffin.

Let's consider how well this problem could run on a
superscalar 64-bit microprocessor.  The description
above would seem to require 2 adds, 2 adds-with-carry,
2 complements, 2 conditional moves, and 2 instances
of a priority encoder.  Each of the latter could be
implemented as two iterations of binary search followed
by a table lookup, e.g. as a shift, branch, shift, branch,
load byte (4KB table), add constant.  That's 30-40
clocks on a 2-integer issue machine, assuming all
four branches are mispredicted with a 5-cycle
misprediction penalty (as on a DEC Alpha 21164),
and mostly L1 cache hits.  With a 3 ns clock, that's
about 100 ns per iteration.
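
As a sanity check on that instruction count, here is what the
binary-search-plus-table priority encoder might look like for a 48-bit
operand.  This is a rough sketch of the approach described, not code from
the post; the 4096-entry table matches the "4KB table" mentioned:

```python
# Precomputed table: index of the highest set bit of a 12-bit value.
# 4096 one-byte entries ~= the 4KB table in the estimate above.
TABLE = [0] * 4096
for i in range(12):
    for v in range(1 << i, 1 << (i + 1)):
        TABLE[v] = i

def priority_encode_48(x):
    """Index of the highest set bit of x (0..47), or -1 if x == 0."""
    if x == 0:
        return -1
    base = 0
    if x >> 24:               # first binary-search step: top half?
        x >>= 24
        base += 24
    if x >> 12:               # second step: narrow to a 12-bit slice
        x >>= 12
        base += 12
    return base + TABLE[x]    # table lookup on the remaining 12 bits
```

Two compare-and-shift steps narrow 48 bits to a 12-bit slice; the
data-dependent branches on those compares are exactly where the estimated
misprediction penalties go.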

Now let's consider an FPGA implementation,
specifically a simpleminded, non-pipelined, no-sharing,
approach.  We'll use a Xilinx XC4010E-2 (20x20 CLBs),
$110 quantity one according to www.marshall.com.

128-bit adder:
Slow approach: 64 CLBs configured as a 128-bit fast
ripple carry adder (3+ columns of CLBs).
Latency approx. 3+64*.6 =42 ns.

Faster approach: 16+2*16+2*16+2*16 CLBs configured
as a 32+32+32+32-bit carry select adder.
Latency approx. 3+16*.6+5 gate delays+slop=25 ns.
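
The carry-select structure is easy to model in software.  A sketch in
Python, using the same 32+32+32+32 split (the block size and names are
mine, matching the partition above):

```python
def carry_select_add_128(a, b):
    """Model a 128-bit carry-select adder built from 32-bit blocks.

    Each block computes two candidate sums, for carry-in 0 and for
    carry-in 1; the real carry out of the previous block then selects
    between them.  All block sums proceed in parallel and only the
    (fast) select chain is serial.
    """
    MASK = (1 << 32) - 1
    result, carry = 0, 0
    for blk in range(4):
        x = (a >> (32 * blk)) & MASK
        y = (b >> (32 * blk)) & MASK
        s0 = x + y            # candidate sum, carry-in = 0
        s1 = x + y + 1        # candidate sum, carry-in = 1
        s = s1 if carry else s0
        result |= (s & MASK) << (32 * blk)
        carry = s >> 32       # selects the next block's candidate
    return result, carry
```

In hardware the loop body is replicated, not iterated, which is where the
latency saving over a full 128-bit ripple chain comes from.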

[By the way, if a CLB's F-LUTs and fast carry
circuits are configured as a 2-bit fast ripple carry adder,
can you still use its H-LUT to multiplex one of its
F-LUT outputs with another signal, driving *that*
onto a X or Y output?]

Assuming H-muxes to multiplex the intermediate sums,
two such adders require about 240 CLBs, more than
half of the chip resources.  You can store the sums
in the flip-flops in the same CLBs.

(Note, we did not discuss the adder input values.
We may need to store an additional 256 or 512
bits of input state.)

Two 48-bit inversions require a total of 48 CLBs
if you can't fold them in elsewhere.
Latency: 2-3 ns of logic plus some interconnect delay.
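
The conditional inversion is just an XOR against a mask that is all ones
or all zeros; as a one-line software model (names mine):

```python
MASK48 = (1 << 48) - 1  # 48 ones

def cond_invert_48(x, invert):
    """XOR with all ones complements x; XOR with zero passes it through."""
    return x ^ (MASK48 if invert else 0)
```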

48-bit priority encoder: 
A 4-bit encoder is 1.5 CLBs.
A 16-bit encoder is about 10 CLBs: 4 4-bit encoders
(6 CLBs) + about 4 CLBs to combine them.
A 48-bit encoder is about 36 CLBs: 3 16-bit encoders
(30 CLBs) + about 6 CLBs to combine them.
Therefore, two encoders require 72 CLBs.
Latency is about 4 gate delays (8 ns) plus
some interconnect delay => 15 ns.
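
The hierarchical composition can be checked against a flat software
model; a sketch of the 3 x 16-bit decomposition in Python (helper names
are mine):

```python
def encode16(x):
    """16-bit priority encoder: index of the highest set bit, -1 if x == 0."""
    return x.bit_length() - 1

def encode48(x):
    """48-bit encoder from three 16-bit encoders plus combine logic,
    mirroring the 3 x 16-bit + combine structure estimated above."""
    hi, mid, lo = (x >> 32) & 0xFFFF, (x >> 16) & 0xFFFF, x & 0xFFFF
    if hi:
        return 32 + encode16(hi)
    if mid:
        return 16 + encode16(mid)
    return encode16(lo)   # returns -1 when x == 0
```

The if-chain plays the role of the CLBs that combine the three sub-encoder
outputs; in hardware all three sub-encoders evaluate concurrently.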

Altogether, a non-pipelined implementation could
complete an iteration in 50 ns.  If there is a way to
overlap or pipeline the adder and the priority encoder,
you might get this down to 30 ns.

This is 2-3X faster than the general purpose CPU
approach.  However, it is also instructive to compare
the approximate latencies of the individual operations.

Adds: 6 ns (CPU); 25 ns (FPGA).
Inversions: 6 ns (CPU); <10 ns (FPGA).
Priority encoders: 90 ns (CPU); 15 ns (FPGA).

The CPU loses badly because of the expense of the
priority encoder implementation.  So, one approach
to the problem is to find a CPU with a priority
encoder (e.g. the Microunity Mediaprocessor),
or a not-yet-existent CPU with a little bit of
programmable logic in its datapath, with which
to implement a priority encoder...

This circuit just fits in the 400 CLBs of an XC4010E.
Two instances would fit in a XC4020E or XC4025E,
more in the promised XC4000EX family parts.
So, if this is not an inherently serial problem,
doing several iterations in parallel would show
a further performance advantage for the FPGA
approach.  Or, as Ray Andraka suggests in another
reply, if your problem tolerates a lot of parallelism,
a "massively" instanced bit serial approach will be
faster yet.

Jan Gray

Article: 5072
Subject: Re: ASICs Vs. FPGA in Safety Critical Apps.
From: milne@cv.com (Ewan D. Milne x3767)
Date: 20 Jan 1997 18:45:55 GMT
Links: << >>  << T >>  << A >>
Julian Cox (CoxJA@augustsl.demon.co.uk) wrote:
: Or use 3 processors / FPGA's to give you fault tolerance there too.
: ;-)

Of course, this does not always work.  Just ask the folks at Stratus
that built a fault-tolerant machine out of Intel i860 processors which
used a "random" TLB replacement strategy.  Hard to keep the processors
synchronized at the bus cycle level when the chip behavior is not
always deterministic.  Worth considering when choosing a chip.

[comp.sys.mentor removed from newsgroups]

--
----------------------------------------------------------------
Ewan D. Milne / Computervision Corporation  (milne@petra.cv.com)
Article: 5073
Subject: FPGA'97: Advanced Registration deadline in 2 days
From: hauck@eecs.nwu.edu (Scott A. Hauck)
Date: Mon, 20 Jan 1997 15:48:40 -0600
Links: << >>  << T >>  << A >>
-------------------------------------------------------------------------------
                          Advance Program

           1997 ACM/SIGDA Fifth International Symposium on
               Field-Programmable Gate Arrays (FPGA'97)

   Sponsored by ACM SIGDA, with support from Altera, Xilinx, and Actel

            Monterey Beach Hotel, Monterey, California
                      February 9-11, 1997
          (Web page: http://www.ece.nwu.edu/~hauck/fpga97)

------------------------------------------------------------------------------
Welcome to the 1997 ACM/SIGDA International Symposium on Field-Programmable
Gate Arrays (FPGA'97).  This annual symposium is the premier forum for
presentation of advances in all areas related to FPGA technology, and
also provides a relaxed atmosphere for exchanging ideas and stimulating
discussions for future research and development in this exciting new field.

This year's symposium sees a strong increase in interest in FPGA
technology, with over a 20% increase in paper submissions.  The technical
program consists of 20 regular papers, 35 poster papers, an evening panel,
and an invited session.   The technical papers present the latest results
on advances in FPGA architectures, new CAD algorithms and tools for FPGA
designs, and novel applications of FPGAs.  The Monday evening panel
will debate whether reconfigurable computing is commercially viable.
The invited session on Tuesday morning addresses the challenges for
architecture development, CAD tools, and circuit design of
one million-gate FPGAs and beyond.

We hope that you find the symposium informative, stimulating, and enjoyable.

Carl Ebeling, General Chair
Jason Cong, Technical Program Chair
------------------------------------------------------------------------------

SYMPOSIUM PROGRAM

Sunday February 9, 1997

6:00pm  Registration

7:00pm  Welcoming Reception,
        Monterey Beach Hotel, Monterey

Monday February 10, 1997

7:30am  Continental Breakfast/Registration

8:20am  Welcome and Opening Remarks

Session 1:  FPGA Architectures
Session Chair:  Rob Rutenbar, Carnegie Mellon Univ.
Time:   8:30 - 9:30am

1.1   "Architecture Issues and Solutions for a High-Capacity FPGA",
      S. Trimberger, K. Duong, B. Conn, Xilinx, Inc.

1.2   "Memory-to-Memory Connection Structures in FPGAs with Embedded Memory
      Arrays",
      Steven J.E. Wilton, J. Rose, Z.G. Vranesic, University of Toronto

1.3   "Laser Correcting Defects to Create Transparent Routing for Large Area
      FPGAs",
      G.H. Chapman, B. Bufort, Simon Fraser University

Poster Session 1: Analysis and Design of New FPGA Architectures
Session Chair:  Tim Southgate, Altera, Inc.
Time:   9:30 - 10:30am (including coffee break)

Session 2:  FPGA Partitioning and Synthesis
Session Chair:  Richard Rudell, Synopsys, Inc.
Time:   10:30 - 11:30am

2.1  "I/O and Performance Tradeoffs with the FunctionBus during Multi-FPGA
     Partitioning",
     F. Vahid, University of California, Riverside

2.2  "Partially-Dependent Functional Decomposition with Applications in FPGA
     Synthesis and Mapping",
     J. Cong, Y. Hwang, Univ. of California, Los Angeles

2.3  "General Modeling and Technology-Mapping Technique for LUT-based FPGAs",
     A. Chowdhary, J.P. Hayes, University of Michigan

Poster Session 2:  Logic Optimization for FPGAs
Session Chair:  Martine Schlag, Univ. of California, Santa Cruz
Time:   11:30 - 12noon

Lunch:  noon - 1:30pm

Session 3:  Rapid Prototyping and Emulation
Session Chair:  Carl Ebeling, Univ. of Washington
Time:   1:30 - 2:30pm

3.1  "The Transmogrifier-2: A 1 Million Gate Rapid Prototyping System",
     D.M. Lewis, D.R. Galloway, M. V. Ierssel, J. Rose, P. Chow,
     University of Toronto

3.2  "Signal Processing at 250 MHz using High-Performance Pipelined FPGA's",
     Brian Von Herzen, Rapid Prototypes, Inc.

3.3  "Module Generation of Complex Macros for Logic-Emulation Applications",
     Wen-Jong Fang, Allen C.H. Wu, Duan-Ping Chen, Tsinghua University

Poster Session 3: Novel FPGA Applications
Session Chair:  Brad Hutchings, Brigham Young Univ.
Time:   2:30 - 3:30pm (including coffee break)

Session 4:  Reconfigurable Computing
Session Chair:  Jonathan Rose, Univ. of Toronto
Time:   3:30 - 4:30pm

4.1  "Wormhole Run-time Reconfiguration",
     R. Bittner, P. Athanas, Virginia Polytechnic Institute

4.2  "Improving Computational Efficiency Through Run-Time Constant
     Propagation",
     M.J. Wirthlin, B.L. Hutchings, Brigham Young University

4.3  "YARDS: FPGA/MPU Hybrid Architecture for Telecommunication Data
     Processing",
     A. Tsutsui, T. Miyazaki, NTT Optical Network System Lab.

Poster Session 4:  Reconfigurable Systems
Session Chair: Scott Hauck, Northwestern Univ.
Time:   4:30 - 5:30pm

Dinner: 6:00 - 7:30pm

Evening Panel: Is reconfigurable computing commercially viable?
Moderator: Herman Schmit, Carnegie Mellon Univ.
Time:   7:30 - 9:00pm

Panelists:
Steve Casselman: President, Virtual Computer Corp.
Daryl Eigen: President, Metalithic Systems, Inc.
Robert Parker: Deputy Director, ITO, DARPA
Peter Athanas: Assistant Professor, Virginia Polytechnic Institute
Robert Colwell: Pentium Pro Architecture Manager, Intel Corp.

In this panel session, we will try to address the question of whether
there will be a mass market for FPGA-based computing solutions.  Are
there large sets of applications whose performance requirements far
exceed that offered by microprocessors but which are only
occasionally executed?  Where are these applications?  Does the
ability to reconfigure during execution change the cost and
performance benefits of reconfigurable hardware significantly?  What
are the key challenges to making reconfigurable computing a reality,
and what can PLD manufacturers, system houses, government, and
academia do to overcome these obstacles?

Tuesday February 11, 1997

Session 5:  FPGA Floorplanning and Routing
Session Chair:  Dwight Hill, Synopsys, Inc.
Time:   8:30 - 9:30am

5.1  "Synthesis and Floorplanning for Large Hierarchical FPGAs",
     H. Krupnova, C. Rabedaoro, G. Saucier, Institut National Polytechnique de
     Grenoble/CSI

5.2  "Performance Driven Floorplanning for FPGA Based Designs",
     J. Shi, Dinesh Bhatia, University of Cincinnati

5.3  "FPGA Routing and Routability Estimation Via Boolean Satisfiability",
     R.G. Wood, R.A. Rutenbar, Carnegie Mellon University

Poster Session 5: High level Synthesis and Module Generation for FPGAs
Session Chair:  Martin Wong,  Univ. of Texas at Austin
Time:   9:30 - 10:30am (including coffee break)

Session 6 (Invited):  Challenges for 1 Million-Gate FPGAs and Beyond
Session Chair:  Jason Cong, Univ. of California, Los Angeles
Time:   10:30am - noon

Process technology advances tell us that the one million gate FPGA will soon
be here, and larger devices shortly after that.  Current architectures
will not extend easily to this scale because of process characteristics and
because new opportunities are presented by the increase in available
transistors.  In addition, such large FPGAs will also present significant
challenges to the computer-aided design tools and methods.
Two invited papers address these issues.

6.1  "Architectural and Physical Design Challenges for One Million Gate FPGAs
     and Beyond",
     Jonathan Rose, University of Toronto, Dwight Hill, Synopsys, Inc.

6.2. "Challenges in CAD for the One Million-Plus Gate FPGA",
      Kurt Keutzer, Synopsys, Inc.

Lunch:  noon - 1:30pm

Session 7:  Studies of New FPGA Architectures
Session Chair:  Steve Trimberger, Xilinx, Inc.
Time:   1:30 - 2:30pm

7.1  "A CMOS Continuous-time Field Programmable Analog Array",
     C.A. Looby, C. Lyden, National Microelectronics Research Center

7.2  "Combinational Logic on Dynamically Reconfigurable FPGAs",
     D. Chang, M. Marek-Sadowska, Univ. of California, Santa Barbara

7.3  "Generation of Synthetic Sequential Benchmark Circuits",
     M. Hutton, J. Rose, D. Corneil, University of Toronto

Poster Session 6:  FPGA Testing
Session Chair:  Sinan Kaptanoglu, Actel, Inc.
Time:   2:30 - 3:30pm (including coffee break)

Session 8:  Novel Design and Applications
Session Chair:  Pak Chan, Univ. of California, Santa Cruz
Time:   3:30 - 4:10pm

8.1  "Synchronous Up/Down Binary Counter for LUT FPGAs with Counting Frequency
     Independent of Counter Size",
     A.F. Tenca, M. D. Ercegovac,  Univ. of California, Los Angeles

8.2  "A FPGA-based Implementation of a Fault Tolerant Neural Architecture for
     Photon Identification"
     M. Alderighi, E.L. Gummati, V. Piuri, G.R. Sechi, Consiglio Nazionale delle
     Ricerche, Universita degli Studi di Milano, Politecnico di Milano

4:30pm Symposium Ends.

-------------------------------------------------------------------------------
Organizing Committee:

General Chair:    Carl Ebeling, University of Washington
Program Chair:    Jason Cong, UCLA
Publicity Chair:  Scott Hauck, Northwestern University
Finance Chair:    Jonathan Rose, University of Toronto
Local Chair:      Pak Chan, UC Santa Cruz

Program Committee:

Michael Butts, Quickturn                   Pak Chan, UCSC
Jason Cong, UCLA                           Carl Ebeling, U. Washington
Masahiro Fujita, Fujitsu Labs              Scott Hauck, Northwestern Univ.
Dwight Hill, Synopsys                      Brad Hutchings, BYU
Sinan Kaptanoglu, Actel                    David Lewis, U. Toronto
Jonathan Rose, U. Toronto                  Richard Rudell, Synopsys
Rob Rutenbar, CMU                          Gabriele Saucier, Imag
Martine Schlag, UCSC                       Tim Southgate, Altera
Steve Trimberger, Xilinx                   Martin Wong, UT Austin
Nam-Sung Woo, Lucent Technologies
-------------------------------------------------------------------------------

Hotel Information

FPGA'97 will be held at the Monterey Beach Hotel, 2600 Sand Dunes Dr.,
Monterey, CA 93940 USA.  The phone number for room reservations is
1-800-242-8627 (from USA or Canada) or +1-408-394-3321 (fax: +1-408-393-1912).
Reservations must be made before January 10, 1997. Identify yourself with
the group: ACM/FPGA'97 to receive the special rates of US$75 single/double
for Gardenside and US$105 single/double for Oceanside (additional person
in the room is $10), plus applicable state and local taxes.

Reservations may be canceled or modified up to 72 hours prior to arrival
without a penalty.  If the cancellation is made within 72 hours of arrival,
or you fail to show up, the first night's room and tax will be charged.  If a
modification is made within 72 hours of arrival (i.e., postpones arrival or
departs earlier than reserved) the actual nights of your stay will be charged
at the quoted rack rate for the room occupied.
Check-in time is 4:00 pm, and check-out time is 12:00 noon.

Directions by car:  From San Jose (1.5 hours) or San Francisco Airport (2.5
hours) take Hwy 101 South to Hwy 156 West to Hwy 1 South.  From Hwy 1 South,
take Seaside/Del Rey Oaks exit. The hotel is at this exit on the ocean side.

You can also fly directly to Monterey Airport, which is served by United,
American and other airlines with at least 8 flights per day.

Monterey Area

The Monterey Peninsula is famous for its many attractions and recreational
activities, such as John Steinbeck's famous Cannery Row and the Monterey Bay
Aquarium.  Also, play one of 19 championship golf courses.  Charter fishing
is available right at Fisherman's Wharf. Monterey is renowned worldwide for
its spectacular coastline, including Big Sur and the Seventeen Mile Drive.
Recreational activities, shopping opportunities and restaurants abound.
-------------------------------------------------------------------------------

Registration Information:

The Symposium registration fee includes a copy of the symposium proceedings,
a reception on Sunday evening, February 9, coffee breaks, lunch on both days,
and dinner on Monday evening, February 10.

First Name:_____________________Last Name:_________________________________

Title/Job Function:________________________________________________________

Company/Institution:_______________________________________________________

Address:___________________________________________________________________

City:___________________________State:_____________________________________

Postal Code:____________________Country:___________________________________

E-mail:_________________________ACM Member #:______________________________

Phone:__________________________Fax:_______________________________________

Circle Fee            Before January 22, 1997   After January 22, 1997

ACM/SIGDA Member      US$300                    US$370
*Non-Member           US$400                    US$470
Student               US$ 90 (does not include reception or banquet,
                              available for US$15 and US$55 respectively)

Guest Reception Tickets:   # Tickets _____x US$15 = ______
Guest Banquet Tickets:     # Tickets _____x US$55 = _______

Total Fees: _________________ (Make checks payable to ACM/FPGA'97)

Payment included (circle one): American Express  MasterCard  Visa  Check

Credit Card # :_______________________  Expiration Date:________

Signature:______________________________________________________

Send Registration, including payment in full, to:

FPGA'97, Meeting Hall, Inc.,
571 Dunbar Hill Rd.,
Hamden, CT 06514 USA
Phone/fax: +1 203 287 9555

For registration information contact Debbie Hall via e-mail at
halldeb@aol.com. Cancellations must be in writing and received
by Meeting Hall, Inc. before January 22, 1997.
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+
|               Scott A. Hauck, Assistant Professor                         |
|  Dept. of ECE                        Voice: (847) 467-1849                |
|  Northwestern University             FAX: (847) 467-4144                  |
|  2145 Sheridan Road                  Email: hauck@ece.nwu.edu             |
|  Evanston, IL  60208                 WWW: http://www.ece.nwu.edu/~hauck   |
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+
Article: 5074
Subject: Re: Oscillator with PLD's or FPGA's
From: Roger Williams <roger@coelacanth.com>
Date: 20 Jan 1997 17:26:14 -0500
Links: << >>  << T >>  << A >>
>>>>> Peter Alfke <peter@xilinx.com> writes:

  > Xilinx XC3000 and XC3100 have an on-chip single-stage inverter,
  > intended for being wrapped around a 1 MHz to 40 MHz crystal, and
  > there are thousands of designs using this feature.  More recent
  > Xilinx architectures don't have this on-chip circuitry for the
  > following reasons:

    [marketing handwaving deleted]

Unfortunately, one important application for the XTL stage was as a
clock-recovery VCXO, which isn't available off the shelf (or for
"around one dollar", either).  I agree that oscillator design may be
beyond the capability of some digital designers, but there are
certainly plenty of competent datacomm engineers out here.

  > There have been no complaints from our customer base when we
  > deleted the crystal oscillator option from the newer
  > architectures...

Ummm...

-- 
Roger Williams                         finger me for my PGP public key
Coelacanth Engineering        consulting & turnkey product development
Middleborough, MA           wireless * DSP-based instrumentation * ATE
tel +1 508 947-8049 * fax +1 508 947-9118 * http://www.coelacanth.com/

