
Messages from 2800

Article: 2800
Subject: Re: Performance Benchmarks: Emulating FPGAs Using General Purpose Processors
From: jsgray@ix.netcom.com(Jan Gray)
Date: 9 Feb 1996 19:32:19 GMT
In <1996Feb9.082016.20892@super.org> sc@vcc.com (Steve Casselman)
writes: 
>
>Another thread that would be good is just for some benchmark
>data on what people are doing now. For example if someone
>is doing neural nets you might post:
>
>I did a 12 neuron neural net that did 12 Billion connections/sec
>in 3000 gates and a ALPHA 330MegHz can do them @ 300 Million  
>connections/sec.  (no I did not do this it is just an example 
>of the detail of the proposed post)

A while back I was wondering about this in general: "how much better
are FPGAs at executing, in hardware, an arbitrary computation, than are
modern general purpose processors at executing, in software, the same
arbitrary computation?".  And so I wrote the following loose analysis
which might be interesting to discuss:


Emulating FPGAs Using General Purpose Processors
Jan Gray, 1995

It is well known that FPGAs can implement general purpose computers, and
general purpose computers can emulate FPGAs.  In this message I consider
how efficiently a general purpose processor can emulate an arbitrary FPGA
circuit.  It arises from my curiosity regarding what performance advantage
(if any) the custom reconfigurable computing crowd has over general purpose
processors, to quantify, best case, how much faster an FPGA is at arbitrary
FPGA problems.


Setting aside such modern features as embedded SRAM, 3-state buses,
asynchronous flip-flop set/reset, and even flip-flop clock enables, we model
a typical lookup-table-based FPGA logic element as an arbitrary 4-input
logic function driving either another stage of logic or a flip-flop.
Without fudging too much we can then consider that all multilevel logic
functions are pipelined with a register between each logic function.  This
simplifies my equivalence construction below and allows us to compare
against an FPGA clock rate based upon the sum of flip-flop clock-to-output
delay, delay through some interconnect, one logic function delay, and the
flip-flop setup time.

Thus, we model an FPGA as a vector of n 4-input function generators F[i]
driving n flip-flops R[i] which are in turn fed back into the function
generator inputs.  In practice most FPGAs are incapable of modeling
arbitrary interconnect schemes between logic elements, instead greatly
favouring local interconnect between nearby logic elements.  However, we'll
be overly generous and permit any function generator F[i] to compute any
function of any 4 flip-flop outputs:
	R[i]' = F[i](R[a[i]], R[b[i]], R[c[i]], R[d[i]]);
for i, a[i], b[i], c[i], d[i] all in [0..n).

Note that a, b, c, d, are simple mappings which describe which flip-flop
outputs are inputs to each given function generator.  For instance, if
R[0]' = R[2] xor R[3] xor R[5] xor R[7], we would have a[0]=2, b[0]=3,
c[0]=5, d[0]=7.

On a general purpose processor with word width w, we can implement the
flip-flop vector by partitioning it into w bit registers.  To simplify the
presentation, assume that n == w.  Then a simulation of one clocking of the
FPGA is to form A, B, C, D, each a mapping of R such that
	A[i] == R[a[i]]
	B[i] == R[b[i]]
	C[i] == R[c[i]]
	D[i] == R[d[i]]
and finally compute
	R'[i] = F[i](A[i], B[i], C[i], D[i])
word-wise, over all the bit positions i simultaneously.

The first sub-problem is to compute 4 mappings of bits of R into A, B, C, D
efficiently.  One approach is to perform wordwise bit shifting, xor, and
mask operations upon R.  For instance, R can be bit-reversed in 5 lg w
simple RISC instructions, arbitrarily permuted in 10 lg w instructions, and
arbitrarily mapped (any bit input to any bit output, including arbitrary
bit replication) in roughly 20 lg w instructions employing 4 lg w constants.
(Knuth developed this by analogy from exchange networks).
Another approach is to build w/8 256-entry lookup tables, for each byte of 
bits in R, and "or" the contributions together:
    word R, LUA[w/8][256], A = 0;
    for (i = 0; i < w/8; i++)
	A |= LUA[i][(R>>(i*8)) % 256];
Written in-line this would require about w/8 shifts, masks, loads and "or"s.  
Let us assume the tables fit into d-cache and charge overall 4*w/8 == w/2 
instructions to arbitrarily map R into A.

For example, for w==64, the wordwise bit mapping approach would require 
approximately 120 instructions, whereas the table lookup approach would 
require approximately 32 instructions (including 8 loads).  Thus the four 
mappings A, B, C, D require about 4*32 == 128 instructions, including 32 
loads and about 8 K words of lookup tables.

The next sub-problem is to compute F(A,B,C,D).  To perform this in parallel 
across all n bits, it suffices to generate the 16 minterms M[0]=~A&~B&~C&~D, 
M[1]=~A&~B&~C&D, ..., M[15]=A&B&C&D and then combine them under 16 masks 
N[0..15]:
	R = M[0]&N[0] | M[1]&N[1] | ... | M[15]&N[15];
By reusing common subexpressions this can be done in about 60 instructions.

All totaled, on a w-bit processor, we need about 4*min(w/2, 20 lg w) + 60 
instructions to simulate one clocking of an arbitrary w-bit wide FPGA, as 
modeled.  Also of interest, the state required to describe the arbitrary 
w-bit FPGA of 4-input logic elements, is only about 16 + 4 lg w words.


Let’s try out this model against some real hardware!  A lowly XC4002A-5 has 
8x8 CLBs each with 2 function generators and two flip-flops, and with a clock 
to flip-flop output delay of 3 ns, an "interconnect delay" which I'll 
empirically and arbitrarily state is 17 ns, and a combined function generator 
plus flip-flop setup time of 5 ns, all totaled, let’s say 25 ns.  In summary, 
we won’t be far wrong (?) to say an ‘02A is a "128-bit FPGA" which you can 
clock at 40 MHz.  In comparison a 32x32 CLB XC4025-5 would be a "2048-bit 
FPGA".

In the general purpose processor corner, let’s employ an Alpha 21164A.  I 
recall this machine can issue 4 instructions per clock at 300 MHz.  Since 
w==64, it would take an Alpha about 200 instructions or about 50 issue slots 
(167 ns) to emulate one clocking of an arbitrary 64-bit FPGA.  Thus our Alpha 
might emulate half an ‘02A at a rate of about 6 MHz.

For another example, the BiCMOS MicroUnity media processor implementation can 
apparently perform 128-bit operations at 1000 MHz.  Here with w==128, it 
could emulate an arbitrary 128-bit FPGA at about 4 MHz, perhaps faster if we 
can take advantage of some of its special bit shuffle instructions.


Well!  These results surprised me.   Even executing a billion operations per 
second on 64- or 128-bit registers, these general purpose processors couldn’t 
achieve 10% of the speed*gates figure of a small FPGA.  Even if you consider 
my "12 ns" FPGA interconnect delay assumption far too generous given the 
arbitrary FPGA’s arbitrary interconnection topology, and derate it to 50 ns 
or more, the little FPGA is still several times faster at FPGA type work than 
a general purpose microprocessor.


(I also wonder if this presentation doesn’t suggest a slow, yet compact, 
physical FPGA implementation.  Attach a vector of flip-flops to the input and 
output sides of a "4-context" n-stage shuffle/copy/exchange interconnect, set 
up to route its input values according to 4 different routing configurations. 
The interconnect can be pipelined, and run at high speed, to map the 
flip-flop outputs into 4 registered function generator inputs and every 8 or 
9 clocks the flip-flop itself could be clocked.  Overall the pipeline could 
run at 200+ MHz and the overall FPGA at 25 MHz.)

Jan Gray
Redmond, WA



Article: 2801
Subject: Repost: Performance Benchmarks: Emulating FPGAs Using General Purpose Processors
From: jsgray@ix.netcom.com(Jan Gray)
Date: 9 Feb 1996 19:51:19 GMT
Reposted after Netcom corrupted the first copy.

In <1996Feb9.082016.20892@super.org> sc@vcc.com (Steve Casselman)
writes: 
>>Another thread that would be good is just for some benchmark
>>data on what people are doing now. For example if someone
>>is doing neural nets you might post:
>>
>>I did a 12 neuron neural net that did 12 Billion connections/sec
>>in 3000 gates and a ALPHA 330MegHz can do them @ 300 Million  
>>connections/sec.  (no I did not do this it is just an example 
>>of the detail of the proposed post)

A while back I was wondering about this in general: "how much better
are FPGAs at executing, in hardware, an arbitrary computation, than are
modern general purpose processors at executing, in software, the same
arbitrary computation?".  And so I wrote the following loose analysis
which might be interesting to discuss:


Emulating FPGAs using general purpose processors
Jan Gray, 1995

It is well known that FPGAs can implement general purpose computers,
and general purpose computers can emulate FPGAs.  In this message I
consider how efficiently a general purpose processor can emulate an
arbitrary FPGA circuit.  It arises from my curiosity regarding what
performance advantage (if any) the custom reconfigurable computing
crowd has over general purpose processors, to quantify, best case, how
much faster an FPGA is at arbitrary FPGA problems.


Setting aside such modern features as embedded SRAM, 3-state buses,
asynchronous flip-flop set/reset, and even flip-flop clock enables, we
model a typical lookup-table-based FPGA logic element as an arbitrary
4-input logic function driving either another stage of logic or a
flip-flop.  Without fudging too much we can then consider that all
multilevel logic functions are pipelined with a register between each
logic function.  This simplifies my equivalence construction below and
allows us to compare against an FPGA clock rate based upon the sum of
flip-flop clock-to-output delay, delay through some interconnect, one
logic function delay, and the flip-flop setup time.

Thus, we model an FPGA as a vector of n 4-input function generators
F[i] driving n flip-flops R[i] which are in turn fed back into the
function generator inputs. In practice most FPGAs are incapable of
modeling arbitrary interconnect schemes between logic elements, instead
greatly favouring local interconnect between nearby logic elements. 
However, we’ll be overly generous and permit any function generator
F[i] to compute any function of any 4 flip-flop outputs:
	R[i]’ = F[i](R[a[i]], R[b[i]], R[c[i]], R[d[i]]);
for i, a[i], b[i], c[i], d[i] all in [0..n).

Note that a, b, c, d, are simple mappings which describe which
flip-flop outputs are inputs to each given function generator.  For
instance, if R[0]’ = R[2] xor R[3] xor R[5] xor R[7], we would have
a[0]=2, b[0]=3, c[0]=5, d[0]=7.

On a general purpose processor with word width w, we can implement the
flip-flop vector by partitioning it into w bit registers.  To simplify
the presentation, assume that n == w.  Then a simulation of one
clocking of the FPGA is to form A, B, C, D, each a mapping of R such
that
	A[i] == R[a[i]]
	B[i] == R[b[i]]
	C[i] == R[c[i]]
	D[i] == R[d[i]]
and finally compute
	R’[i] = F[i](A[i], B[i], C[i], D[i])
word-wise, over all the bit positions i simultaneously.

The first sub-problem is to compute 4 mappings of bits of R into A, B,
C, D efficiently.  One approach is to perform wordwise bit shifting,
xor, and mask operations upon R.  For instance, R can be bit-reversed
in 5 lg w simple RISC instructions, arbitrarily permuted in 10 lg w
instructions, and arbitrarily mapped (any bit input to any bit output,
including arbitrary bit replication) in roughly 20 lg w instructions
employing 4 lg w constants.  (Knuth developed this by analogy from
exchange networks).  Another approach is to build w/8 256-entry lookup
tables, for each byte of bits in R, and "or" the contributions
together:
    word R, LUA[w/8][256], A = 0;
    for (i = 0; i < w/8; i++)
	A |= LUA[i][(R>>(i*8)) % 256];
Written in-line this would require about w/8 shifts, masks, loads and
"or"s.  Let us assume the tables fit into d-cache and charge overall
4*w/8 == w/2 instructions to arbitrarily map R into A.
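
To make the "5 lg w" figure concrete, here is a minimal C sketch (my
own illustration, not part of the original post) of the shift-and-mask
style for the special case of bit reversal with w == 64.  Each of the
lg w == 6 swap steps costs about five RISC operations (two shifts, two
ANDs, one OR), and the final step needs no masks:

    #include <stdint.h>

    /* Reverse the bits of a 64-bit word in lg w == 6 shift-and-mask
       steps, roughly 5 lg w simple instructions in total. */
    static uint64_t bit_reverse64(uint64_t x)
    {
        x = ((x >> 1)  & 0x5555555555555555ULL) | ((x & 0x5555555555555555ULL) << 1);
        x = ((x >> 2)  & 0x3333333333333333ULL) | ((x & 0x3333333333333333ULL) << 2);
        x = ((x >> 4)  & 0x0F0F0F0F0F0F0F0FULL) | ((x & 0x0F0F0F0F0F0F0F0FULL) << 4);
        x = ((x >> 8)  & 0x00FF00FF00FF00FFULL) | ((x & 0x00FF00FF00FF00FFULL) << 8);
        x = ((x >> 16) & 0x0000FFFF0000FFFFULL) | ((x & 0x0000FFFF0000FFFFULL) << 16);
        x =  (x >> 32)                          |  (x << 32);
        return x;
    }

Arbitrary permutations and mappings use the same skeleton but need
per-design mask constants, which is where the 10 lg w and 20 lg w
instruction estimates come from.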

For example, for w==64, the wordwise bit mapping approach would require
approximately 120 instructions, whereas the table lookup approach would
require approximately 32 instructions (including 8 loads).  Thus the
four mappings A, B, C, D require about 4*32 == 128 instructions,
including 32 loads and about 8 K words of lookup tables.
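
To make the table-lookup mapping concrete, here is a hedged C sketch
(my own; the helper names are invented) assuming w == 64.  One routine
builds the eight 256-entry tables from the mapping a[], and the other
applies them with the w/8 shift/mask/load/"or" steps counted above:

    #include <stdint.h>

    typedef uint64_t word;            /* w == 64 in this sketch */
    #define NBYTES 8                  /* w/8 byte-indexed tables */

    /* tab[i][b] has bit j set whenever source bit a[j] lies in byte i of R
       and bit (a[j] % 8) of the byte value b is 1, so OR-ing the eight
       table entries yields A with A[j] == R[a[j]]. */
    static void build_map_tables(const int a[64], word tab[NBYTES][256])
    {
        for (int i = 0; i < NBYTES; i++)
            for (int b = 0; b < 256; b++) {
                word out = 0;
                for (int j = 0; j < 64; j++)
                    if (a[j] / 8 == i && ((b >> (a[j] % 8)) & 1))
                        out |= (word)1 << j;
                tab[i][b] = out;
            }
    }

    /* Apply one mapping: about w/8 shifts, masks, loads and "or"s. */
    static word map_word(word R, word tab[NBYTES][256])
    {
        word A = 0;
        for (int i = 0; i < NBYTES; i++)
            A |= tab[i][(R >> (i * 8)) & 0xff];
        return A;
    }

Each mapping's tables hold (w/8)*256 == 2048 words, so the four
mappings together occupy 8 K words (64 KB at w == 64); building them is
a one-time cost per FPGA configuration.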

The next sub-problem is to compute F(A,B,C,D).  To perform this in
parallel across all n bits, it suffices to generate the 16 minterms
M[0]=~A&~B&~C&~D, M[1]=~A&~B&~C&D, ..., M[15]=A&B&C&D and then combine
them under 16 masks N[0..15]:
	R = M[0]&N[0] | M[1]&N[1] | ... | M[15]&N[15];
By reusing common subexpressions this can be done in about 60
instructions.
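
Here is a straightforward C sketch of that step (again my own
illustration, written as a loop rather than the unrolled,
common-subexpression form the 60-instruction estimate assumes).  N[k]
holds, in bit i, the value F[i] produces when its inputs equal the
4-bit pattern k; in other words, the 16 mask words are the n truth
tables stored "sideways".

    #include <stdint.h>

    /* Evaluate all 64 4-input functions at once: bit i of the result is
       F[i](A[i], B[i], C[i], D[i]).  Bit i of N[k] is F[i] evaluated at
       A = k&1, B = (k>>1)&1, C = (k>>2)&1, D = (k>>3)&1. */
    static uint64_t eval_functions(uint64_t A, uint64_t B, uint64_t C,
                                   uint64_t D, const uint64_t N[16])
    {
        uint64_t R = 0;
        for (int k = 0; k < 16; k++) {
            uint64_t m = (k & 1 ? A : ~A)   /* minterm k at every bit position */
                       & (k & 2 ? B : ~B)
                       & (k & 4 ? C : ~C)
                       & (k & 8 ? D : ~D);
            R |= m & N[k];
        }
        return R;
    }

Unrolling the loop and sharing sub-products (for example forming A&B,
A&~B, ~A&B and ~A&~B once each) is what brings this down to roughly the
60 instructions estimated above.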

All totaled, on a w-bit processor, we need about 4*min(w/2, 20 lg w) +
60 instructions to simulate one clocking of an arbitrary w-bit wide
FPGA, as modeled.  Also of interest, the state required to describe the
arbitrary w-bit FPGA of 4-input logic elements, is only about 16 + 4 lg
w words.


Let’s try out this model against some real hardware!  A lowly XC4002A-5
has 8x8 CLBs each with 2 function generators and two flip-flops, and
with a clock to flip-flop output delay of 3 ns, an "interconnect delay"
which I'll empirically and arbitrarily state is 17 ns, and a combined
function generator plus flip-flop setup time of 5 ns, all totaled,
let’s say 25 ns.  In summary, we won’t be far wrong (?) to say an ‘02A
is a "128-bit FPGA" which you can clock at 40 MHz.  In comparison a
32x32 CLB XC4025-5 would be a "2048-bit FPGA".

In the general purpose processor corner, let’s employ an Alpha 21164A. 
I recall this machine can issue 4 instructions per clock at 300 MHz. 
Since w==64, it would take an Alpha about 200 instructions or about 50
issue slots (167 ns) to emulate one clocking of an arbitrary 64-bit
FPGA.  Thus our Alpha might emulate half an ‘02A at a rate of about 6
MHz.

For another example, the BiCMOS MicroUnity media processor
implementation can apparently perform 128-bit operations at 1000 MHz. 
Here with w==128, it could emulate an arbitrary 128-bit FPGA at about 4
MHz, perhaps faster if we can take advantage of some of its special bit
shuffle instructions.


Well!  These results surprised me.   Even executing a billion
operations per second on 64- or 128-bit registers, these general
purpose processors couldn’t achieve 10% of the speed*gates figure of a
small FPGA.  Even if you consider my "17 ns" FPGA interconnect delay
assumption far too generous given the arbitrary FPGA’s arbitrary
interconnection topology, and derate it to 50 ns or more, the little
FPGA is still several times faster at FPGA type work than a general
purpose microprocessor.


(I also wonder if this presentation doesn’t suggest a slow, yet
compact, physical FPGA implementation.  Attach a vector of flip-flops
to the input and output sides of a "4-context" n-stage
shuffle/copy/exchange interconnect, set up to route its input values
according to 4 different routing configurations.  The interconnect can
be pipelined, and run at high speed, to map the flip-flop outputs into
4 registered function generator inputs and every 8 or 9 clocks the
flip-flop itself could be clocked.  Overall the pipeline could run at
200+ MHz and the overall FPGA at 25 MHz.)

Jan Gray
Redmond, WA


Article: 2802
Subject: Re: Looking for OPAL, PALASM, PLAN
From: Andy Gulliver <andy.gulliver@crossprod.co.uk>
Date: Fri, 09 Feb 1996 14:29:24 -0800
I don't know about the other two, but I wouldn't bother with PALASM - 
it's more trouble than it's worth!

-- 
Regards

AndyG

"Any opinions expressed herein are entirely my own and may or may not
have any connection with reality, virtual or otherwise."


Article: 2803
Subject: Re: New Reconfigurable Computing Threads.
From: Richard_Vireday@ccm.jf.intel.com (Richard Vireday)
Date: Fri, 09 Feb 1996 22:51:32 GMT
sc@vcc.com (Steve Casselman) wrote:
>The threads on comp.arch.fpga have been anything but threads
>on reconfigurable computing. 
...snip snip...

> Do interconnect chips like Aptix and I-Cube belong in the mix?
YES. Definitely. You betcha.

> Will VHDL or Verilog be the programming language for reconfigurable computing
> or are some of the current C like compliers (tmcc, nlc) really the new wave?
I think if we quit calling it Verilog, and call it a HARD-C++ instead,
then it will start getting more mindshare.

>>Is floating point important or because current FPGAs don't have floating point
>>structures do we just through up our hands and keep to integer math?
Add an FPU co-processor slave, and save the FPGA real-estate for more
important stuff.

> What will it take to get reconfigurable computing off the ground?
The reconfigurable FPGA JAVA processor.  Say, what about modifying
Phil Friedin's small RISC into a JAVA interpreter?

>Steve Casselman
>Virtual Computer Corp.

More importantly Steve, and the whole point of this followup, what
kind of beer are you going to bring to FCCM'96 this year?  And what
shall I bring down from Portland?  :-)

--Richard Vireday


Article: 2804
Subject: Re: Help: Xilinx behavior if Power down
From: Scott Kroeger <Scott.Kroeger@mei.com>
Date: Fri, 09 Feb 1996 18:44:35 -0600
Peter Wurbs wrote:
> 
> Hi Xilinx-Freaks,
> 
> I use an output of a XC3195A as a Chip-Select for a SRAM.
> The CS-Signal has a pullup to a battery-buffered voltage to
> maintain RAM data if the power is down. The FPGA is supplied
> by the unbuffered power supply.
> 
> It is the normal behavior of the IOB, that it is high impedance
> if the power voltage is below a certain level.
> But I could measure that the CS-signal is pulled to Low by the
> FPGA if the power voltage is near 0.8V. For less than 0.8 V it is
> o.k. again.
> But the precondition for the pullup to Vbatt is, that the FPGA output
> is high impedance over the full range of power supply.
> 
>                         | Vbatt (1.8V if power down)
>                         |
>                         |
>                         |
>                         -
>                        | |
>                        | |
>                        |_|
> |---------------|       |           |------|
> |               |       |       CS  |      |
> |   FPGA        |-------------------| RAM  |
> |               |                   |      |
> |---------------|                   |------|
>        |                               |
>        |                               |
>       VCC                             Vbatt
> 
> Is this behavior a property of Xilinx-FPGA's ?
> Or is it the problem, that I pull the output to Vbatt while VCC is low ?
> Can I avoid this problem ?


I don't understand how your circuit works at all when Vcc is removed.  
The Xilinx XC3100 family data sheet says, under absolute maximum 
ratings, Vth = -0.5 to Vcc+0.5V (referred to Vdd).  That is to say that 
you should not apply more than Vcc+0.5V (or less than -0.5V) to any pin.  
The static-clamp diodes begin to conduct at this point.  If Vcc = 0, the 
clamp diodes should pull your CS line to about 0.7V.

This is the typical output structure of CMOS devices in general, with 
some exceptions in level translating logic (3.3/5).  You could insert a 
gate in the CS path to block the Xilinx pin at powerdown.  I think there 
are CMOS devices available for operation down to 1.5V or so (don't 
remember where I saw them).

Regards,
Scott


Article: 2805
Subject: Xilinx is NOT specified MINIMUM delay -- is it right??
From: HIKIMA Toshio <hikima@hard2.takasaki.oki.co.jp>
Date: Sat, 10 Feb 1996 14:02:39 +0900
Hi FPGAers,

I am using the Xilinx XC4013. The databook does not specify minimum
delays.
My rep said, "there is no specification about minimum delay". Is that right?

If true, how can I design a DRAM I/F? The DRAM spec has many complex
constraints that require knowing minimum delay values.

Thanks,
--
T.Hikima


Article: 2806
Subject: Re: Chosing VHDL or Verilog Does Have An Impact For U.S. Engineers
From: davids8021@aol.com (DavidS8021)
Date: 10 Feb 1996 07:58:26 -0500
I agree with the assessment of John Cooley.  I was a DoD Engineer who
was laid off when "peace broke out."  My former company would rather hire
new grads and train them rather than allow their present employees to get
training.  I asked if I could train myself, I just need access to a
computer that would allow me to use VHDL.  I was told that since I was not
using this for a specific job, it would be illegal to give me an account. 
I know that was not true, the reason was engineers had to work day and
night because of not purchasing enough tools.  I was a CAE trainer on the
schematic capture and had to come in at 3:00 am to get my own work done.

When I called headhunters, the ONLY thing they were interested in was
buzzwords.  I was given advice early on that I should do a resume that
gave an overview of what I had done.  When I called the headhunter, she
said that I was not getting called because I had not used XYZ tool on
JKL's chip, or process.  I explained that I had done that and she said
that unless I mention all of the buzzwords that I had used, I would not
even get called.  I also found out that many companies have a database
with a search engine.   So when a hiring manager asks for someone who has
used XYZ tool on JKL's chip, the only resumes that get pulled are those
that match the search criteria.  I disagree with this concept, but that is
what is happening.  At my former Dead on Deck (DoD) company, the "expert"
ASIC designer had to ask why I had chosen specific A/D and D/A's to
interface to.  We were using 2's complement data.  This "expert" did not
understand anything about 2's complement, but had the correct buzz words. 
This and another "expert" also told me that a signal was synchronous
because it was generated by a FF, even though it arrived at the desired
location from 1 to 3 clocks later.  ps  "expert"  one who has the correct
buzzwords.

I now work for a commercial company that believes in training its people.  
I let my manager choose my training, because I give him a long list of
what new tools and design languages that I want to learn and he adapts it
to what new business is coming in.  We have agreed that I will get all of
the training that I have desired.  I know that this will keep me
competitive.  I have also been called and offered jobs that will pay more
than what I am presently getting.  But, I value my company's attitude of
training and consider that part of my pay.  If they didn't give me the
training, it would be much harder to get this training elsewhere.

The above makes me glad, but I know excellent engineers who cannot get a
job because they do not have the correct buzz words.  Their companies
would not give them training.  Training is very expensive for laid off
engineers.  Also the present resident of the white house said "we will
give training to laid off defense workers."  Unfortunately, the training
involves learning phrases like the following:  "....and would you like fries
with that."  I challenge the IEEE to give useful training.  The AMA gives
training on the latest procedures.  I am no longer a member because I am
tired of supporting academics who need to "publish or perish."  In our
office we have estimated that at most one in five academic papers are
useful.  Also, I challenge universities to give training for people in
industry.  I have known of intelligent, hard working people who flunked
out of grad school because their job required 70 - 90 hours/week during a
semister.  How about offering classes that have a more flexible schedule
for people who are in industry.  I know that this will require some extra
work for the professors.  Maybe professors should be forced to work in
industry for a year where they are required to continue their research and
work 60 - 70 hours / week.  I also would like to see companies like
synopsys, Cadence, Mentor, etc  to make versions of their software that
will only do small designs, but that engineers could use to learn on their
home PC's.   Also make decent training tapes that do not cost a fortune to
purchase.  These tapes should be by people who actually have produced an
ASIC, or card, or box, for people who need to do these.  Having ... and
"left as an exercise for the reader" does not help an engineer who needs
the information NOW.  There is a difference between a class for higher learning
and learning a new tool or design language.  Each has its place.  But,
please offer courses for those of us who need to keep up to date with the
latest technology.


Article: 2807
Subject: Re: Xilinx is NOT specified MINIMUM delay -- is it right??
From: Scott Kroeger <Scott.Kroeger@mei.com>
Date: Sat, 10 Feb 1996 09:14:27 -0600
HIKIMA Toshio wrote:
> 
> Hi FPGAers,
> 
> I am using Xilinx XC4013. By the databook it is not specified
> minimum delay.
> My rep said, " there is no specificaton about minimum delay". Is it right?
> 
> If true, how can I design DRAM I/F? DRAM's spec has many complex constraient
> so that it should be used minimum delay value.

Correct, there is no specified minimum delay.  I don't know of many 
devices which do specify a minimum delay (for 74XX series devices they 
often specify a token minimum delay of 1ns).  I suggest you generate 
your RAS/Mux/CAS timing from a clocked state machine.  For brute force 
conservatism, set your cycle time (or delay by multiple clocks) so that 
a max delay on a state(n) output and zero delay on a state(n+1) output 
will still maintain your required minimum timing margin.

If you want to push your timing, somewhere in the Xilinx App notes is a 
mention of the degree of tracking between prop delays across the die.  
You can use this information to get a better idea of the actual delay 
tracking you can expect.

Regards,
Scott


Article: 2808
Subject: Altera Simulation
From: neal@ctd.comsat.com (Neal Becker)
Date: 10 Feb 1996 10:52:02 -0500
I am just trying simulation on Altera for the first time.  I want to
initialize some internal registers.  Is Altera's simulator really not
capable of performing this basic function?  I can't seem to find any
way to do this!


Article: 2809
Subject: Re: PIC16C71 CORE for XC4000 ?
From: Eric Ryherd <eric@vautomation.com>
Date: 11 Feb 1996 17:54:11 GMT
telkamp@eis.cs.tu-bs.de (Gerrit Telkamp) wrote:
>Hello,
>
>where can I get a PIC16C71-core (schematic or VHDL) for a XILINX XC4000 design ?
>

I'd be surprised if Microchip would let a schematic version of their
core out on the open market.

However, we have several microprocessors that can be targeted to FPGAs
(we prototype them in Xilinx 4000). If you're not stuck on the PIC,
check out our cores...

- 
Eric Ryherd                eric@vautomation.com  
VAutomation Inc.           Synthesizable HDL Cores 
20 Trafalgar Square        http://www.vautomation.com
Suite 443 Nashua NH 03063  (603) 882-2282 FAX:882-1587




Article: 2810
Subject: Re: 8274 Inside FPGA?
From: Eric Ryherd <eric@vautomation.com>
Date: 11 Feb 1996 17:56:24 GMT
prenato@trantor.it.pt (Paulo Oliveira) wrote:
>
>Hello !
>
>I'm developing a system in which I need a HDLC/LAPD controller to add 
>to my Intel 386 EX processor. Is there a chance to put the controller 
>inside a FPGA? Do you know a family of FPGAs whith such libraries? 

We don't have the 8274, but we are coming out with our HDLC core
very soon. Kind of expensive to include in an FPGA though, unless
you're prototyping an ASIC...

Check out our WEB page for more...

-- 
Eric Ryherd                eric@vautomation.com  
VAutomation Inc.           Synthesizable HDL Cores 
20 Trafalgar Square        http://www.vautomation.com
Suite 443 Nashua NH 03063  (603) 882-2282 FAX:882-1587




Article: 2811
Subject: Re: 8274 Inside FPGA?
From: Applications Division <serial@singnet.com.sg>
Date: 12 Feb 1996 08:27:04 GMT
Hi,

It is possible to implement an 8274 inside an ALTERA Flex-10K CPLD. Altera 
has tie-ups with other third-party model developers who supply 
synthesizable VHDL or Verilog code for various complex models. 

This is called the Altera Mega-functions Partnership Program (AMPP). Look 
up AMPP on the web site "//www.altera.com" or send an e-mail to 
Altera Applications at "sos@altera.com".

Kiran Vittal
Serial System, Singapore


prenato@trantor.it.pt (Paulo Oliveira) wrote:
>
>Hello !
>
>I'm developing a system in which I need a HDLC/LAPD controller to add 
>to my Intel 386 EX processor. Is there a chance to put the controller 
>inside a FPGA? Do you know a family of FPGAs whith such libraries? 
>
>Thank you in advance,
>
>Paulo Oliveira.




Article: 2812
Subject: Re: Chosing VHDL or Verilog Does Have An Impact For U.S. Engineers
From: jwill@netcom11.netcom.com (John Williams)
Date: Mon, 12 Feb 1996 09:36:09 GMT
Here's my $0.02 worth on higher education, corporate short sightedness, and
the risks involved with being an engineer in today's economy.

First of all, I think the schools are very much out of touch with what the
industry is doing, and charge tremendous amounts of money for a very
mediocre education. There is practically no economic incentive for
corporate america to train professors on how work actually gets done.
I think the idea of having professors return to industry ( kind of like
a negative tenure ) is a great idea.

Second, I worked at a company that had an extensive training program. The
problem there was that it was for all their internal proprietary tools.
What company in its right mind is going to train their employees to be
more marketable? When I was laid off, I found myself in the position of
having no concrete marketable skills. I cheerfully and gratefully accepted
a lower position that included using industry standard tools. What did I do
to get out of this position? I trained myself. Every engineer should involve
themselves in self training, reading books, and learning new skills. I
don't bother with school, it's too slow, expensive, and irrelevant. Do the
job you're doing NOW to the best of your abilities.

Don't complain, because you certainly can't count on anyone else making
you a more attractive member of the engineering labor market.

Could the government help? I have serious reservations about the one
institution more arcane than the universities. I don't expect much, I
won't be disappointed.

						John Williams


Article: 2813
Subject: Re: FPGA entry for <$1000?
From: tartis@world.std.com (Tad B Artis)
Date: Mon, 12 Feb 1996 09:53:21 GMT
Mike Diack (moby@kcbbs.gen.nz) wrote:
: Can anyone suggest any software vendor supplying entry level FPGA tools
: for less than $US1k ?. I know of the Xilinx $995 package, but it
: requires additional schematic entry ($$$$$) software. What's out there
: that wint reqiure me to mortgage the cat ?. SRAM based preferred.
: M

As I recall, Altera's s/w was about $0.5k to $1.5k and had HDL capabilities which 
normally seem to cost much more.


Article: 2814
Subject: uC HDL models
From: Stephan Dubach <sdubach@ztl.ch>
Date: 12 Feb 1996 10:27:51 GMT
We are investigating HW and SW codesign for microcontroller
systems. System simulation is the key to go for.
I don't know the current market for HDL models. Therefore 
I would like to post a few questions:

- which companies (big guys) are involved in the HDL model world market?
- what do they sell (behavioral, RTL or gate level models)?
- what models are available for standard uC and peripherals?
- what about the cost for a 68HC11 or a 80C51 model?
- how far do behavioral and synthesizable models match?
- for system simulation I only know of the way to link the
  executable code to a memory model. Are there other ways to speed
  things up?

- Where can I get more information on this topic?
Thanks for answering any of these questions

Stephan

------------------------------------------------------------------
Stephan Dubach                                      sdubach@ztl.ch
MicroSwiss



Article: 2815
Subject: Re: Help wanted: Addressing PCI memory-mapped device above 16mB
From: "Barry B. Brey" <bbrey@ee.net>
Date: Mon, 12 Feb 1996 07:54:44 -0500
Gerard van Soest wrote:
> 
> We have just finished developing a PCI card which gets
> memory mapped in to memory-space above the 1mB limit
> imposed by real mode programming.
> 
> What I would like to know is if there is an easy way to
> address this memory ie. is there a compiler out there that will
> take care of protected mode for you, whether it is possible
> to use a memory manager etc. If all else fails, it looks
> like I will have to resort to asembly code, and unfortunately
> I just don't have the time to do this :(
> 
> Any help would be greatly appreciated. Please email replies
> to vansoegi@elec.canterbury.ac.nz.
> 
> Cheers G.

You don't state which operating system, i.e. DOS, Windows 3.11, Windows 95, 
etc.  You could use VCPI to access this area of memory under DOS; it's loaded 
by HIMEM.SYS and provides drivers for accessing extended memory.  If you are 
using Windows 95 and shell to DOS you can also use DPMI, which also provides 
drivers for access to extended memory.

You might obtain my latest book which explains these drivers and also shows 
how to setup a XXXX.SYS driver for a device.
-- 
look at my webpage at:

http://users1.ee.net/brey/

Very truly yours,

Barry B. Brey
Professor Electronic Engineering Technology
DeVry Institute of Technology
1350 Alum Creek Drive
Columbus, Ohio 43209


Article: 2816
Subject: Re: Altera Simulation
From: Tony Clark <tonyc@perth.DIALix.oz.au>
Date: Mon, 12 Feb 1996 20:57:25 +0800
On 10 Feb 1996, Neal Becker wrote:

> I am just trying simulation on Altera for the first time.  I want to
> initialize some internal registers.  Is Altera's simulator really not
> capable of performing this basic function?  I can't seem to find any
> way to do this!
> 
> 
You can do it, but it is painful.  You have to get the buried node into 
the waveform editor and then define its value.  I find it easier to bring 
the signals out of the chip and do it that way.


Article: 2817
Subject: Re: Xilinx is NOT specified MINIMUM delay -- is it right??
From: Tony Clark <tonyc@perth.DIALix.oz.au>
Date: Mon, 12 Feb 1996 21:01:04 +0800
On Sat, 10 Feb 1996, HIKIMA Toshio wrote:

> Hi FPGAers,
> 
> I am using Xilinx XC4013. By the databook it is not specified
> minimum delay.
> My rep said, " there is no specificaton about minimum delay". Is it right?
> 
> If true, how can I design DRAM I/F? DRAM's spec has many complex constraient
> so that it should be used minimum delay value.
> 
> Thanks,
> --
> T.Hikima
> 
> 
You really need to simulate after working out the delays due to routing 
etc.  I'm not a Xilinx user yet, but that's the impression I get from 
looking at the Databook...



Article: 2818
Subject: Re: Xilinx FPGA's with Mentor Tools?
From: John Murray <john.murray@gecm.com>
Date: 12 Feb 1996 17:41:09 GMT
Lance,

We are using Mentor's Top down tools to target Xilinx devices using XACT and NeoCAD. 
We had to tolerate a litany of problems (whinge time!). Mentor and Xilinx do not 
integrate (they barely support netlist passing).

1/ XACT did not install correctly on our HP-UX host
2/ Mentor creates ModelName_s; Xilinx does not like _s. We couldn't create a .xnf 
file for months
3/ Xilinx recommend XBlox. XBlox models don't have timing characteristics.
4/ TimSim8 (backannotation of routed design timing to "Schematic") does not modify 
the source schematic (EDDM database) via a viewpoint. It generates another design 
database in a separate directory. This presents a rather serious configuration 
control problem.
5/ An EDIF read licence is required, at the unreasonable price of ~£2000.
6/ XNFBA still does not work.

Mentor tools work well together and deliver good results. Xilinx tools are good and 
deliver well. The integration, intertool communications and design control are 
absolutely lousy. It is particularly annoying given that Mentor's Framework 
Initiative promised to solve these problems! (Oh yes, it's Xilinx's problem for not 
integrating......)
Neither Mentor nor Xilinx could provide us with an overall process diagram and 
process description.

We have, somehow, managed to knife and fork a usable process.

Solutions for these problems are, of course, just around the corner.







Article: 2819
Subject: Re: Help: Xilinx behavior if Power down
From: peter@xilinx.com (Peter Alfke)
Date: 12 Feb 1996 20:10:52 GMT
In article <311BEA73.4AD7@mei.com>, Scott.Kroeger@mei.com wrote:

> Peter Wurbs wrote:
> > 
> > Hi Xilinx-Freaks,
> > 
> > I use an output of a XC3195A as a Chip-Select for a SRAM.
> > The CS-Signal has a pullup to a battery-buffered voltage to
> > maintain RAM data if the power is down. The FPGA is supplied
> > by the unbuffered power supply.
> > 
> >
> >                         | Vbatt (1.8V if power down)
> >                         |
> >                         |
> >                         |
> >                         -
> >                        | |
> >                        | |
> >                        |_|
> > |---------------|       |           |------|
> > |               |       |       CS  |      |
> > |   FPGA        |-------------------| RAM  |
> > |               |                   |      |
> > |---------------|                   |------|
> >        |                               |
> >        |                               |
> >       VCC                             Vbatt
> > 

The circuit, as shown, cannot work in your application.
There are two reasons why the output pin cannot be pulled substantially
higher than the supply voltage: The p-channel pull-up transistor used in
this complementary output acts as a clamping diode, plus there is a strong
diode deliberately built into the chip for Electro-Static Discharge (ESD)
purposes. Without such a diode, it would be very easy to rupture the gate
oxide of the input buffer transistor.
At low Vcc ( below 3 V ) the XC3195A output driver will go high-impedance,
but there will still be two diodes: one prevents the pin from going more
positive than its Vcc pin, the other one prevents the pin from going more
negative than its ground pin. Of course, these are silicon diodes, so
there is hardly any current in them for the first 500 mV of
forward-biasing.

Here is the solution to your problem:
Stick an inverter directly in front of your CS input, and then use a
pull-down resistor instead of a pull-up, and of course reverse the logic
sense of the XC3195A output.
If you have any additional questions, send me e-mail.

Peter Alfke, Xilinx Applications, San Jose, CA
e-mail: peter@xilinx.com


Article: 2820
Subject: Re: Xilinx FPGA's with Mentor Tools?
From: vanbeek@students.uiuc.edu (Christopher VanBeek)
Date: 12 Feb 1996 20:35:26 GMT
Hi,

I'm a college student at the University of Illinois and am just
learning to use VHDL. I have compiled and synthesized some simple
designs using Mentor's tools and Xilinx libraries. Here was
the design flow I used:

First, I had to create a work directory using Mentor's "qvlib"
program. Then compile the VHDL using "qvcom". Simulation can be
performed using "qvsim", or I think QuickSim (I have not tried
QuickSim). I found qvsim to be faster than QuickSim ever was,
and it allows step tracing and breakpoints in the source as well
as waveforms. To synthesize, I compiled using "qvcom" with the
"-sythesis" option. This executes Mentor's System-1076 Compiler
after compiling the VHDL. That creates a symbol in the work
directory and the Autologic viewport for the design. Then I ran
Mentor's Autologic program, read in the viewport, and set the
destination technology. The Autologic libraries for the Xilinx
FPGAs are on supportnet.mentorg.com in the
/pub/mentortech/tdd/libraries/fpga directory. Then I set a couple
constraints and hit the synthesis button. It created a bunch of
sheets with Xilinx gates and flip flops on them.

Christopher Van Beek


Article: 2821
Subject: Re: FPGA entry for <$1000?
From: devb@lys.vnet.net (David Van den Bout)
Date: 12 Feb 1996 16:47:21 -0500
In article <499637.30226.14415@kcbbs.gen.nz>,
Mike Diack <moby@kcbbs.gen.nz> wrote:
>Can anyone suggest any software vendor supplying entry level FPGA tools
>for less than $US1k ?. I know of the Xilinx $995 package, but it
>requires additional schematic entry ($$$$$) software. What's out there
>that wint reqiure me to mortgage the cat ?. SRAM based preferred.
>M


Mike:

XESS Corp still sells the epXkit for $159 (US).  It gives you a
3,500 gate EPX780 CPLD/FPGA along with the PLDSHELL software and
a textbook of experiments.  You can check http://www.xess.com for
more info or e-mail to fpga-info@xess.com.

-- 
|| Dave Van den Bout  --  XESS Corp. ||
|| 2608 Sweetgum Dr., Apex, NC 27502 ||
|| (919) 387-0076 FAX:(919) 387-1302 ||
|| devb@xess.com       devb@vnet.net ||


Article: 2822
Subject: Re: Xilinx is NOT specified MINIMUM delay -- is it right??
From: Andy Gulliver <andy.gulliver@crossprod.co.uk>
Date: Mon, 12 Feb 1996 15:51:41 -0800
Because of the internal structure of the XILINX devices, the exact delays 
for any implementation will depend on the exact placement and routing of 
the logic.  This also means that re-compiling the same design will 
produce different timings!

The only accurate(-ish) way to check timings is to do a post-routing 
simulation - sadly not very easy for XILINX.

What most data will give is *maximum* timings (for *any* semiconductor 
device), so that you can work to worst-case timings.  Working to minimum 
timing specs is *not* a good idea - I've seen people get into problems 
doing just that.

-- 
Regards

AndyG

"Any opinions expressed herein are entirely my own and may or may not
have any connection with reality, virtual or otherwise."


Article: 2823
Subject: SIGDA UNIVERSITY BOOTH AT DAC-96, CALL FOR DESIGN DEMOS
From: Balakrishnan Iyer <biyer>
Date: 13 Feb 1996 02:02:44 GMT
SIGDA UNIVERSITY BOOTH AT DAC-96

	CALL FOR DESIGN DEMOS

You may be aware of SIGDA's popular University Booth program at the
annual Design Automation Conference, where university researchers
demonstrate their CAD tools and present recent research results.
Following the DAC's mission of serving the design community,
SIGDA plans to expand the University Booth's activity by introducing
demonstrations of university-based design projects.

The DAC-96 University Booth will thus feature both the traditional CAD
demos, as well as demos of hardware projects (ranging from FPGA design
prototypes to multiple chip systems) emanating from universities.
This will give university-based designers and DAC-96 authors tremendous
visibility and the opportunity for valuable feedback from the
DAC-96 participants.

As this year's DAC University Booth coordinator, I invite you to
participate in the University design demo/poster session by demonstrating
your designs.

If you wish to participate in the DAC-96 SIGDA University Booth,
please inform me or one of the persons listed below by e-mail.
At a later date, I will send you more information on the logistics
for the booth.

I look forward to your response and hope that you will participate
to make the SIGDA University Booth at DAC-96 a big success!

Ramesh Karri
DAC-96 University Booth Coordinator
karri@india.ecs.umass.edu
Phone: (413)-545-2725
FAX: (413)-545-1993

Balakrishnan Iyer
biyer@ecs.umass.edu
Phone: (413)-545-0715

Aurobindo Dasgupta
dasgupta@ecs.umass.edu
Phone: (413)-545-0831

Kyosun Kim
kkim@ecs.umass.edu
Phone: (413)-545-0831

-- 
Balakrishnan Iyer

Dept. of Electrical & Computer Engineering              Ph.: (413)-545-0715
University of Massachusetts at Amherst              Office : KEB 310
Marcus Hall
Amherst, MA - 01003.
____________________________________________________________________________



Article: 2824
Subject: Re: Looking for OPAL, PALASM, PLAN
From: moby@kcbbs.gen.nz (Mike Diack)
Date: 13 Feb 96 06:39:16 GMT
In message <<823892654snz@fpga.demon.co.uk>> David Pashley <david@fpga.demon.co.uk> writes:
> 
> Since AMD stopped developing their own software, PALASM has been
> superceded by MACH-XL3, which is a very good tool, being based on 
> MINC's PLDesigner-XL. I'm not sure how much AMD charge for MACH-XL3, 
> but I imagine it would be good value for a beginner.
>  
There is a version of MACHXL on the Elrad FTP site in the 060 directory
at ftp.ix.de (source of much amazing stuff) and you're right - it's a
bloody good piece of software. Machs are great chips too.
M



