
Messages from 51650

Article: 51650
Subject: Re: Lecroy Research Systems - what happened?
From: Eric Inazaki <einazaki@mac.com>
Date: Fri, 17 Jan 2003 16:55:18 -0600
On Fri, 17 Jan 2003 14:13:06 -0800, John Larkin
<jjlarkin@highSNIPlandTHIStechPLEASEnology.com> wrote:

>On Fri, 17 Jan 2003 13:41:53 -0500, Mu Young  Lee <muyoung@umich.edu>
>wrote:
>
>>Is there a company out there that makes anything close to what LeCroy
>>Research Systems used to make?  Anyone know of people who used to design
>>the boards at LRS?
>>
>>Mu Young Lee
>>Thousand Oaks, CA
>
>
>My company makes some VME and CAMAC modules that do some of the
>high-speed timing stuff that LeCroy used to do. We don't do high
>voltage or FastBus, though.
>
>http://www.highlandtechnology.com/
>
>Also check out Phillips Scientific, Jorger, and Jorway. See vita.org
>if you're interested in VME.
>
>What sort of stuff are you working on?
>
>John

Also maybe KineticSystems (www.kscorp.com), Bi-Ra (www.bira.com),
BNC (www.berkeleynucleonics.com), and one of those EG&G
companies; I think it was Ortec.

Was kind of a bummer when LeCroy folded up their research instruments
division.

Article: 51651
Subject: Re: XST vs Synplify observations
From: "Austin Franklin" <austin@da98rkroom.com>
Date: Fri, 17 Jan 2003 18:07:45 -0500
Might want to be careful here.  Read your Synplicity license VERY carefully.
If I remember right, you might not be allowed to "discuss" benchmark
results, which would include saying "this is better than that" ;-)

I really dislike software license agreements...they just come up with the
damndest things...

"Roger Green" <rgreen@bfsystems.com> wrote in message
news:4A_V9.39867$NV.820849@news.direcpc.com...
> I have recently been driven to switch to Synplify synthesis, from XST,
> for a fairly complex PCI core VHDL design targeting a Virtex 300, in
> order to work around various ISE tool bugs (which is another story
> altogether).  After a fairly painful conversion of constraints (for both
> timespecs and previous floorplanning) and getting the design to actually
> compile again, I have noted that the logic paths now failing to meet my
> timespecs are in logic that previously was not a problem with XST
> synthesis, although there were problem paths then also.  At first
> glance, my "gut" says that there are more logic levels (LUTs) being used
> in the problematic paths, but I don't have direct proof of that, since I
> never actually looked at these specific paths with the XST runs.  I am
> using the identical "max" effort settings for PAR as before the
> synthesis tool switch.
>
> I was wondering if any of the experts here have had similar or different
> experiences regarding the "performance" of these two tools on the same
> source design.  Or perhaps someone could point me to links to any
> "objective" comparison of these tools and/or information regarding what
> types of logic each is better/worse at optimizing?
>
> --
> Roger Green
> B F Systems - Electronic Design Consultants
> www.bfsystems.com
>
>



Article: 51652
Subject: Re: Schematic design approach compared to VHDL entry approach
From: "Austin Franklin" <austin@da98rkroom.com>
Date: Fri, 17 Jan 2003 18:09:27 -0500

"Eric Smith" <eric-no-spam-for-me@brouhaha.com> wrote in message
news:qh1y3bh3rz.fsf@ruckus.brouhaha.com...
> I wrote:
> > Have you seen such a manual for a C compiler?
>
> "Austin Franklin" <austin@da98rkroom.com> wrote:
> > Er, yes.
>
> I wrote:
> > Where?
>
> "Austin Franklin" <austin@da98rkroom.com> wrote:
> > On my book shelf.
>
> Well, don't keep us in suspense.  Tell us what manual it is.  I'd like
> to see for myself this manual that allows one to predict what code will
> be generated from an arbitrary C function.

It's not arbitrary, it's entirely deterministic.  That's the POINT!

Will you pay me by the hour to find it for you, then scan it in, page by
page....so you can see it?

Austin



Article: 51653
Subject: Re: Lecroy Research Systems - what happened?
From: Mu Young Lee <muyoung@umich.edu>
Date: Fri, 17 Jan 2003 18:17:32 -0500
On Fri, 17 Jan 2003, John Larkin wrote:
>
> http://www.highlandtechnology.com/
>
> What sort of stuff are you working on?

I was specifically interested in their fast programmable logic CAMAC
modules, which employed Xilinx FPGAs.  I am not tied to CAMAC, however;
if there is a more modern off-the-shelf equivalent, that would do as well.

Mu


Article: 51654
Subject: Re: Booting Spartan IIE from SPI
From: Peter Alfke <peter@xilinx.com>
Date: Fri, 17 Jan 2003 15:37:43 -0800
This should be easy.
The data sheets and app notes describe the serial configuration data
format.  Then you just select serial slave mode and feed the bits in
appropriately.  The tolerances are quite wide (frequency from 0 to tens
of MHz).

Don't put the burden on us of deciding whether we support any one of many
obscure standards.  We tell you the signals we can accept, and then you
can figure out whether there is a match.  Otherwise we can argue in
circles forever...

Peter Alfke
=============================
Peter Wallace wrote:

> On Fri, 17 Jan 2003 11:47:48 -0800, Michael Wilspang wrote:
>
> > Hi Falk and Patrick
> >
> > Falk; have you tried this interface solution ?
> >
> > But Patrick's experience says something else!!!
> >
> > I'm confused!
> > --
> >
> > /Michael Wilspang
>
> We've done it with a PIC and it works fine.
>
> The configuration interface is not SPI, but it is a simple synchronous
> serial interface, the PIC has no trouble configuring (at least SpartanII
> chips)...
>
> Peter Wallace
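The procedure described above (present a bit on DIN, pulse CCLK, repeat for every bit, then clock in dummy bits for the startup sequence) can be sketched in C. This is a minimal sketch, not Xilinx or PIC code: the clock_bit callback and record_bit hook are hypothetical stand-ins for GPIO writes, and MSB-first byte order is an assumption to verify against the data sheet for your part.

```c
/* Sketch of bit-banged slave-serial configuration: present one bit on
   DIN, pulse CCLK, repeat for every bit, then clock in a trailing dummy
   byte for the startup sequence.  The clock_bit callback is hypothetical;
   on a real MCU it would be two GPIO writes (DIN, then CCLK high/low).
   Bit order is assumed MSB-first per byte -- check the data sheet. */
#include <stddef.h>

typedef void (*clock_bit_fn)(int bit);

static void shift_config(const unsigned char *data, size_t len,
                         clock_bit_fn clock_bit)
{
    for (size_t i = 0; i < len; i++)
        for (int b = 7; b >= 0; b--)       /* MSB of each byte first */
            clock_bit((data[i] >> b) & 1);
    for (int b = 0; b < 8; b++)            /* dummy byte for startup */
        clock_bit(1);
}

/* Test hook standing in for the GPIO driver: records the bit stream. */
static int captured[256];
static size_t ncaptured;
static void record_bit(int bit) { captured[ncaptured++] = bit; }
```

In a real design the loop body is the only timing-sensitive part; as Peter notes above, CCLK tolerances are wide, so a slow bit-banged clock is fine.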


Article: 51655
Subject: Re: Multiple FPGA-boards integration issues
From: prashantj@usa.net (Prashant)
Date: 17 Jan 2003 15:39:48 -0800
Gotcha!!  Will read those app notes.  It builds your confidence to know
that it's a common problem and there are solutions available.

Thanks everyone,
Prashant

spam_hater_7@email.com (Spam Hater 7) wrote in message news:<e9486eb9.0301171037.208d6535@posting.google.com>...
> Prashant,
> 
> What everyone is trying to tell you: Yes, everything you want to do is
> possible, BUT...
> 
> You are going to need to treat the devices like they all have
> different clocks.
> 
> The fact that they all run at 40MHz is irrelevant.  The internal
> differences cannot be ignored.
> 
> Do a web search on transferring data between different clock domains. 
> (Xilinx has a couple of app notes on their web site.  IIRC, Peter
> wrote them, and they're good.)
> 
> It's a common problem that everyone has to face.
> 
> SH
> 
> prashantj@usa.net (Prashant) wrote in message news:<ea62e09.0301170743.1b3a638e@posting.google.com>...
> > > You get rid of long-term differences, yes.  But you still have to employ
> > > interface protocols to transfer data from one FPGA to the other.
> > > 
> > > Marc
> > 
> > I think it is these interface protocols that I'm trying to get an idea
> > about (and I still don't have a lot). I also realized that my FPGA
> > board does not have a clk out (unlike my mention of it in the 1st
> > posting). Which means that while the board can accept external clocks,
> > I don't see a way to send a clock out from the board. Any ideas how
> > that can be achieved?
> > 
> > Thanks,
> > Prashant
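The clock-domain-crossing app notes mentioned above revolve around one basic building block, the two-flop synchronizer: a signal entering a new clock domain is registered twice before use. A minimal C model follows; it is a sketch that captures only the two-cycle latency, since the real point (filtering metastability) is an analog hardware property that C cannot model, and the names are illustrative.

```c
/* C model of a two-flop synchronizer.  On each destination-domain clock
   edge both stages sample simultaneously, so ff2 takes the *old* ff1
   before ff1 is updated from the asynchronous input. */
typedef struct { int ff1, ff2; } sync2_t;

/* One rising edge of the destination clock; returns the synchronized
   value, which lags the input by two destination clocks. */
static int sync2_edge(sync2_t *s, int async_in)
{
    s->ff2 = s->ff1;
    s->ff1 = async_in;
    return s->ff2;
}
```

Multi-bit buses need more than this (a handshake or an async FIFO); the app notes cover those cases.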

Article: 51656
Subject: Re: quality of software tools in general
From: Ray Andraka <ray@andraka.com>
Date: Sat, 18 Jan 2003 00:49:05 GMT
I'm afraid perhaps I read too much into your post and the email you sent me
then, as I was under the impression you were looking to build the processing
portion of a supercomputer with FPGAs (which as I pointed out has been done
for specific problems).

A 64 bit floating point vector pipeline unit is not out of the question for
FPGAs, but for performance you'll need to be careful about how it is
implemented.  It is certainly not a trivial design, but if it is to be used
on something that is too compute intesive to be economically done with cpus,
or if the design is to be general enough to address a class of problems
instead of a single problem (ie. getting reuse out of the design), then the
cost of development may be justifiable.  Don't expect tools like handel-C
driven by people who are not already hardware, and preferably FPGA, design
experts to turn out anything that will make the effort worthwhile compared
to just using the CPUs.

Matt wrote:

> Ray Andraka wrote:
> >
> > The FPGA gains its advantage by creating a circuit optimized to the
> > particular task, not by replicating CPUs.  For example, in DNA pattern
>
> Maybe it wasn't clear but I'm not interested in replacing CPUs, only the
> hardware used for inter-processor communication.  The only way I could
> see replacing the CPU is if something like a vector unit could be built
> that would be substantially faster than a CPU. A 64 bit wide floating
> point vector unit sounds beyond current capabilities. Maybe if a 1 bit
> wide floating point alu could be built then 50 of these could be put on
> a chip. Tie that to dual ported  memory so the reconfigurable part could
> read and write vector registers  while the vector unit operates and that
> would be interesting.  A nearby cpu could do scalar ops, set up the
> communications part of the fpga, and issue vector instructions.
>
> Matt

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

 "They that give up essential liberty to obtain a little
  temporary safety deserve neither liberty nor safety."
                                          -Benjamin Franklin, 1759



Article: 51657
Subject: Re: Lecroy Research Systems - what happened?
From: John Larkin <jjlarkin@highSNIPlandTHIStechPLEASEnology.com>
Date: Fri, 17 Jan 2003 16:52:55 -0800
On Fri, 17 Jan 2003 16:55:18 -0600, Eric Inazaki <einazaki@mac.com>
wrote:


>Was kind of a bummer when LeCroy folded up their research instruments
>division.

Not for me!

John


Article: 51658
Subject: Re: Schematic design approach compared to VHDL entry approach
From: Ray Andraka <ray@andraka.com>
Date: Sat, 18 Jan 2003 00:53:34 GMT
Austin,

I think just a reference would be sufficient.  You did say it is on your
bookshelf, and I'm sure you have at least an idea where on the bookshelf it
is.  Couldn't you just give us the title, author(s), publisher, date (and,
on the outside chance it has one, the ISBN), and, if you are feeling
generous, the pertinent page numbers?



Austin Franklin wrote:

> "Eric Smith" <eric-no-spam-for-me@brouhaha.com> wrote in message
> news:qh1y3bh3rz.fsf@ruckus.brouhaha.com...
> > I wrote:
> > > Have you seen such a manual for a C compiler?
> >
> > "Austin Franklin" <austin@da98rkroom.com> wrote:
> > > Er, yes.
> >
> > I wrote:
> > > Where?
> >
> > "Austin Franklin" <austin@da98rkroom.com> wrote:
> > > On my book shelf.
> >
> > Well, don't keep us in suspense.  Tell us what manual it is.  I'd like
> > to see for myself this manual that allows one to predict what code will
> > be generated from an arbitrary C function.
>
> It's not arbitrary, it's entirely deterministic.  That's the POINT!
>
> Will you pay me by the hour to find it for you, then scan it in, page by
> page....so you can see it?
>
> Austin

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

 "They that give up essential liberty to obtain a little
  temporary safety deserve neither liberty nor safety."
                                          -Benjamin Franklin, 1759



Article: 51659
Subject: Re: Lecroy Research Systems - what happened?
From: Tullio Grassi <tullio@umd.edu>
Date: Fri, 17 Jan 2003 21:35:30 -0500
Mu Young Lee wrote:
> Is there a company out there that makes anything close to what LeCroy
> Research Systems used to make?  Anyone know of people who used to design
> the boards at LRS?
> 
> Mu Young Lee
> Thousand Oaks, CA
> 

I think this company is continuing some of the
  old LeCroy products:

   http://www.datadesigncorp.net/

-- 

Tullio Grassi

======================================
Univ. of Maryland - Dept. of Physics
College Park, MD 20742 - US
Tel +1 301 405 5970
Fax +1 301 699 9195
======================================


Article: 51660
Subject: Re: Student development board
From: wv9557@yahoo.com (Will)
Date: 17 Jan 2003 18:49:19 -0800
"geeko" <jibin@ushustech.com> wrote in message news:<b088sm$mhcs0$1@ID-159027.news.dfncis.de>...
> There is nothing at the link http://www.geocities.com/wv9557, only a
> link to the board's diagram.  Please give the design details.
> 

Well, I do not recommend prototyping with the XC4000 series as they are
old parts. The other thing is that you need Foundation 2.1i to make
bitstreams for the XC4000, and Foundation is not free. If you are
still curious about the wiring, then drop me a note. Actually it's pretty
simple: just wire all the VCC pins, all the GND pins, and the JTAG pins,
provide a clock, and you are in business.

Article: 51661
Subject: Re: Schematic design approach compared to VHDL entry approach
From: Eric Smith <eric-no-spam-for-me@brouhaha.com>
Date: 17 Jan 2003 19:03:00 -0800
I wrote:
> Have you seen such a manual for a C compiler?

"Austin Franklin" <austin@da98rkroom.com> wrote:
> Er, yes.

I wrote:
> Where?

"Austin Franklin" <austin@da98rkroom.com> wrote:
> On my book shelf.

I wrote:
> Well, don't keep us in suspense.  Tell us what manual it is.  I'd like
> to see for myself this manual that allows one to predict what code will
> be generated from an arbitrary C function.

"Austin Franklin" wrote:
> It's not arbitrary, it's entirely deterministic.  That's the POINT!

No, the point was that I should be able to take any arbitrary C code
of my choosing, and use that book to determine what code the compiler
will generate.  This seems analogous to your claims that a logic synthesis
tool manual should let you figure out what your Verilog code will
compile to.

I am assuming (perhaps foolishly) that this book you are claiming to
exist isn't just a printed listing of the C compiler source code, or
object code.  Technically such a listing printed as a book would, with
enough study, reveal the information you want, but it would in practice
be useful to no one but the compiler developers and maintainers.

> Will you pay me by the hour to find it for you, then scan it in, page by
> page....so you can see it?

Yes.  If the book actually can be used for the purpose you claim, I will
pay you by the hour to find it and provide a reasonable citation whereby
other people (such as myself) can look it up.  No scanning should be
necessary.

Unless, of course, the book isn't something publicly available, in which
case it doesn't serve as a good example of what you are claiming logic
synthesis tool manuals should provide.

Article: 51662
(removed)


Article: 51663
Subject: Re: XST vs Synplify observations
From: Roger Green <rgreen@bfsystems.com>
Date: Fri, 17 Jan 2003 21:15:06 -0700
Hi Mike,

I was actually referring to the post-route timing analysis in both cases.
Since the source design is the same and the Xilinx PAR settings are
identical, I'm led to believe that the difference is primarily due to the
synthesis tool used.  Although the routed result is never identical
between PAR runs, usually the problematic paths remain the same.  My main
point is that after spending considerable time optimizing and
floorplanning a design, then switching synthesis tools, it appears that I
have an entirely "different" set of design paths to work on now.

And BTW the critical timing paths are, and continue to be, synchronous
register-to-register combinational logic paths -- just different ones.

-- Roger


Mike Treseler wrote:
> 
> Maybe you are comparing XST post-route results to
> Synplify pre-route timing estimates. Place and route
> your Synplify netlist before making timing comparisons.
> 
> Synthesis is optimized for synchronous inputs
> and registered outputs. The closer you can
> be to this standard template the better.
> 
>   -- Mike Treseler

Article: 51664
Subject: Re: XST vs Synplify observations
From: Roger Green <rgreen@bfsystems.com>
Date: Fri, 17 Jan 2003 21:45:29 -0700
Kevin,

Thanks for your detailed insights.  The IP core I'm using is from Xilinx
and, although I am resynthesizing it with the rest of the design, it is
essentially a "black box" of pre-compiled structured netlist which is 
simply instantiated along with my considerable "back-end" vhdl design.
The design paths which are failing timespecs are not part of the IP
core, but are in my custom part of the hierarchy <g> which is attempting
to run at much faster clock speeds than the 33MHz PCI interface in this
case.

The design was originally developed with ISE 4.2, then "updated" to 5.1
to overcome some floorplanner bugs. Unfortunately, the 5.1 version of XST
would no longer work (fatal error crashes) on the same design so, as a
workaround, I switched to Synplify to continue the "final" optimization
efforts on performance goals.

Since the different synthesis tool seems to have caused entirely
different logic paths to become the slower ones at post-route, I'm
feeling the need to ask why, and would like to understand which synthesis
tools might be better suited to different designs.  After all, how many
ways can you reduce/optimize the combinational logic of a synchronous
design into 4-input LUTs?  About as many ways as there are synthesis
tools, I'm sure.

-- Roger
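For readers wondering what "reducing combinational logic into 4-input LUTs" means concretely: a 4-LUT is simply a 16-entry truth table indexed by its four inputs, so any Boolean function of up to 4 variables costs exactly one LUT, and the synthesis question above is really about how each tool carves a netlist into such tables. A minimal illustrative C model (not any vendor's API):

```c
/* Model of a single 4-input LUT: a 16-bit truth table indexed by the
   input vector {d,c,b,a}.  Any 4-input Boolean function fits in one. */
static int lut4(unsigned short truth, int a, int b, int c, int d)
{
    int index = (d << 3) | (c << 2) | (b << 1) | a;  /* 0..15 */
    return (truth >> index) & 1;                     /* table lookup */
}
```

For example, truth = 0x8888 implements a AND b (bits 3, 7, 11, 15 set, i.e. every index where a=b=1), and 0x6666 implements a XOR b, both ignoring c and d.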

Article: 51665
Subject: Re: quality of software tools in general
From: johnjakson@yahoo.com (john jakson)
Date: 17 Jan 2003 20:50:02 -0800
Matt <matt@NOSPAMpeakfive.com> wrote in message news:<3E2840CE.C588725C@NOSPAMpeakfive.com>...
> john jakson wrote:
> [a great description of working with ffts in hw, thanks]
> 

Thanks

> 
> Occam isn't used in supercomputing.  It's all C and Fortran. The reason
> is the people that write the codes do science first and software second.
> 

Yea, I know. What I am suggesting is that Occam/Handel-C is a bridge
between the supercomputing F/C way of thinking and the HW way of thinking.
If you understand how Occam plays in distributed computing over
Transputer arrays, as was done 15 years ago, you are halfway to
understanding how to do the same in HW. The higher level Par blocks
correspond to multiple parallel instances of HW, and the communication
between the Par processes using messages, channels & links
corresponds to wires & busses. In fact most of the Transputer crowd of
companies seem to be alive and well in FPGAs & DSPs; same problems,
more speed. Some of the Oxford/Occam people became Celoxica and
figured out how to map Seq into time-shared resources, which is
behavioural synthesis.

SW engineers find Seq easy & Par hard (i.e. thread-safety issues);
HW engineers find Par easy & Seq hard (i.e. buggy state machines).


> 
>     do j=1,n
>        do i=1,n
>           a(i,j) = k1*b(i+1,j) + k2*b(i-1,j)
>        enddo
>     enddo
> 

This sort of computation isn't even remotely difficult or interesting
for ASIC/FPGAs.

For any computer, throughput is figured by dividing the total number of
math/move operations required into the total capability of the machine
available; this usually gives a good best-case upper bound. Then start
dividing by a fudge factor for unforeseens. I usually don't see compute
times ever given for SW, since SW engineers either don't care or don't
know; x86s are now basically unpredictable (1 op per Hz is my rule).
Embedded & DSP SW guys generally do know, because the CPUs/DSPs are more
predictable & slower and the performance is part of the spec. Cycle
counting seems to be alien to SW guys, but all HW design requires
detailed cycle planning of all math operations & mem rd/wr.

In HW, same rule, except you get to determine whether real-estate
gates/LUTs are used for muls (about 2000 gates) or adders, muxes, mems,
etc. In modern FPGAs the 18x18 muls & memories are available in the
fabric. For this example you could use 2 muls, an adder, 2 mem blocks
(one dual-ported b[] and one single-ported a[]) and a state machine. But
the b[] refs can be a single mem rd, so now only a single dual-port or
2 single-port mems are needed. The i+1 & i-1 would also be combined with
the loop counter and delay registers. Compute time is now n*n clocks,
which could be up to 120MHz or more. Things can be sped up more by
replicating the engine m times, but you are limited to 2 mem accesses
per cycle per block RAM. Speed-ups can also be done by further
pipelining the *+ operation by another 2x. If in fact you were
filtering an image, say with all 9, 25, or 49 elements around i,j, then
you really need only increase the depths of the delay buffers and add
more muls & adds to do the whole dot product every cycle. The additions
would be done in pipeline cycles after the muls, so you are really
only limited by the single worst-case path delay, usually the muls or
big adds. Typically the a[] result will be going to another operation,
in which case the mem-write limit could be removed and the next op
placed in the pipeline sequence, allowing m > 1. In many DSP
problems, the * can be replaced by a few +s in canonic signed-digit
math. When +s are the critical delay, these can be super-pipelined by
breaking them into smaller + pieces or using CSA/Wallace arrays so the
clock speed can go way up.
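The delay-register trick described above (stream b[] through once, one memory read per clock, and combine the i+1 and i-1 taps from delayed copies) can be modelled in software. A hedged sketch with illustrative sizes and coefficients; the clock counter simply mirrors the one-result-per-clock assumption in the post, so a column of n elements costs about n clocks (n*n for the full array), setup and drain ignored.

```c
/* One column of the stencil a(i) = k1*b(i+1) + k2*b(i-1), computed with
   a single b[] read per "clock" and two delay registers, the way the
   FPGA datapath above would.  Edge elements of a[] are left untouched. */
#define N 8

static long stencil_column(double a[N], const double b[N],
                           double k1, double k2)
{
    double prev = 0.0, prev2 = 0.0;   /* delay registers */
    long clocks = 0;

    for (int i = 0; i < N; i++) {     /* one b[] read per clock */
        double in = b[i];             /* in = b[i], prev2 = b[i-2] */
        if (i >= 2)                   /* so this is a(i-1) */
            a[i - 1] = k1 * in + k2 * prev2;
        prev2 = prev;
        prev = in;
        clocks++;
    }
    return clocks;
}
```

The point of the model: both taps are available every cycle without a second memory port, which is exactly why the post says one dual-port (or two single-port) block RAMs suffice.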

So FPGAs cycle about 10-20x slower than P4s, but you can plan on >100
*+ [] ops & thousands of other small ops per cycle, for a huge improvement
over x86, & you can still throw in 1 or more CPUs, hard or soft.

Think of an ASIC like a JPEG2000 codec. You can buy it as a chip
(soon) & it can do the job in x us. You can NOT, however, buy the
C/Verilog code that precisely models the behaviour of the chip. The
C/Verilog code simulates the chip maybe millions of times slower than the
ASIC. The ASIC can be seen as an accelerated version of that private C
code, i.e. millions of times faster. Now FPGAs can do the same as ASICs,
but are generally say 10x slower & maybe 10x more $. The C code that
models these ASICs/FPGAs isn't even useful to an end user, but the std C
code for JPEG2000 is available, open & useful & say 10x faster than the
ASIC/FPGA C model. Net result is that the useful C code is still 10,000x
slower than an FPGA. OK, so these numbers need adjusting.

Ultimately, it has nothing to do with the quality of SW tools, only the
breadth & quality of HW-SW design knowledge, and very little of that
design know-how exists in any SW tools. The tools are good enough for
most HW people to do what they want.



Ray wrote:
Regarding the FFT, Cooley-Tukey is more or less a software algorithm.
There are other factorizations that significantly reduce the number of
multiplies.  For example, a radix 16 Winograd FFT only requires 14

Winograd was on the tip of my tongue, hence the Blahut ref. But not many
folks venture into these books & turn it into HW, since muls on a CPU get
replaced by adds & moves and might still not win out. It was great when
muls took many cycles though.

According to Blahut, it takes 10(18) muls & 76 adds. Would this op be
exactly equivalent to, say, a general rad16 composed of 8 rad4s or 32
rad2s with slightly different errors, and can this block be used in any
rad16 stage but with different coeffs?

I think the big reason the exotics don't get used is that Matlab users
and most FFT libs use radix-2 Cooley-Tukey. If I had to use a really fast
SW FFT, I'd probably just use Intel's MMX/SSE tuned asm code.
JJ

Article: 51666
Subject: Re: quality of software tools in general
From: johnjakson@yahoo.com (john jakson)
Date: 17 Jan 2003 21:01:10 -0800
Matt <matt@NOSPAMpeakfive.com> wrote in message news:<3E2879E3.3EE8FEC1@NOSPAMpeakfive.com>...
> Ray Andraka wrote:
> > 
> > The FPGA gains its advantage by creating a circuit optimized to the
> > particular task, not by replicating CPUs.  For example, in DNA pattern
> 
> Maybe it wasn't clear but I'm not interested in replacing CPUs, only the
> hardware used for inter-processor communication.  The only way I could
> see replacing the CPU is if something like a vector unit could be built
> that would be substantially faster than a CPU. A 64 bit wide floating
> point vector unit sounds beyond current capabilities. Maybe if a 1 bit
> wide floating point alu could be built then 50 of these could be put on
> a chip. Tie that to dual ported  memory so the reconfigurable part could
> read and write vector registers  while the vector unit operates and that
> would be interesting.  A nearby cpu could do scalar ops, set up the
> communications part of the fpga, and issue vector instructions.
> 
> Matt


Actually, replacing the CPUs is precisely what FPGAs are good for, when
10% of the code keeps the CPUs busy 90% of the time. It would seem a
strange thing to let CPUs compute & route through FPGA hw.

So now you are doing FP; what's the application?

Article: 51667
Subject: Re: XST vs Synplify observations
From: "Matt" <bielstein2002@attbi.com>
Date: Sat, 18 Jan 2003 07:31:01 GMT
It seems to me that you might consider talking directly to Synplicity if
you haven't already done so.  They actually care, and maybe they could
offer some suggestions you may not have thought of.  IMHO.

Matt



"Roger Green" <rgreen@bfsystems.com> wrote in message
news:3E28DBE9.1BF8A644@bfsystems.com...
> Kevin,
>
> Thanks for your detailed insights.  The IP core I'm using is from Xilinx
> and, although I am resynthesizing it with the rest of the design, it is
> essentially a "black box" of pre-compiled structured netlist which is
> simply instantiated along with my considerable "back-end" vhdl design.
> The design paths which are failing timespecs are not part of the IP
> core, but are in my custom part of the hierarchy <g> which is attempting
> to run at much faster clock speeds than the 33MHz PCI interface in this
> case.
>
> The design was originally developed with ISE 4.2, then "updated" to 5.1
> to overcome some floorplanner bugs. Unfortunately, the 5.1 version of
> XST would no longer work (fatal error crashes) on the same design so,
> as a workaround, I switched to Synplify to continue the "final"
> optimization efforts on performance goals.
>
> Since the different synthesis tool seems to have caused entirely
> different logic paths to become the slower ones at post-route, I'm
> feeling the need to ask why, and would like to understand which
> synthesis tools might be better suited to different designs.  After all,
> how many ways can you reduce/optimize the combinational logic of a
> synchronous design into 4-input LUTs?  About as many ways as there are
> synthesis tools, I'm sure.
>
> -- Roger



Article: 51668
Subject: Re: quality of software tools in general
From: edmurkin@yahoo.co.uk (Ed)
Date: 18 Jan 2003 04:02:39 -0800
Matt <matt@NOSPAMpeakfive.com> wrote in message news:<3E2840CE.C588725C@NOSPAMpeakfive.com>...
> john jakson wrote:
> [a great description of working with ffts in hw, thanks]
> 
> > If you try to
> > use HandelC to do real hw design at the level of detail hw guys would
> > get into, I'd say you beating a dead horse.
> 
> I'll take your word for it, as that's kind of what I expected

I'd disagree.  The difference in approach, I believe, between a language
such as Handel-C and VHDL is that with VHDL you describe the hardware
you want to build, while with Handel-C you describe the problem you want
to solve, i.e. the algorithm.

We have evaluated Handel-C and the results are impressive.  It's not a
replacement for VHDL, but our work with Virtex II Pro and Excalibur
has shown the value of Handel-C.  It can partition designs with a
technology called DSM, and the EDIF output produces good QoR.  We also
output Verilog and VHDL from the Handel-C compiler and ran this
through Synplify with a direct link from Handel-C to Synplify.

There are simple optimization techniques  
> 
> > Seeing your web site, if you are familiar with Occam & its view of
> > Processes, then you will know exactly where HandelC is coming from,
> > since its is just the C version of Occam and Occam is basically a HDL
> > simulator.
> 
> Occam isn't used in supercomputing.  It's all C and Fortran. The reason
> is the people that write the codes do science first and software second.
> 
> > If you have a more interesting app than FFT, lets us know.
> 
> I've been staying away from the app because it's fuzzy but this is it:
> Someone writes a fortran/C program for use on a single processor.  They
> then add directives on how to parallelize it. It's important to realize
> that the person that wrote this program barely wants to mess with
> parallelism, not to mention hardware design. Now, the whole game with
> parallel computing is to keep the alus busy. The alus are plenty fast
> but getting data to and from them is difficult. It's worse when you tie
> 1000 processors together. Mostly, if people get 15% of peak speed
> they're happy. What's worse is the contortions that people have to go
> through to make use of the hardware. It's said that software lags
> hardware but I believe that's because the hardware is such a pain to use
> efficiently. It would be much easier for people to use parallel
> computers if they weren't so difficult to squeeze work out of.
> 
> What I'm wondering is rather than use a one size fits all communication
> interconnect it wouldn't be better to use an fpga fabric (with lots of
> cpus embedded) that could be specialized for each program. So if a chunk
> of code includes something along the lines of
> 
>     do j=1,n
> 	do i=1,n
> 	   a(i,j) = k1*b(i+1,j) + k2*b(i-1,j)
> 	enddo
>     enddo
> 
> and this runs across 1000 processors then the fpga would contain
> circuitry to handle communication and synchronization for elements not
> on the local processor. There are lots of ways to handle this depending
> on the context of the loop nest and a few other things. It would be best
> if the values of B needed on different processors were sent as soon as
> they were generated. So the cpu might write the value and address to a
> port in the fpga, which would in turn send a message to the fpga on the
> correct node, which would hold it in some sort of cache-like
> memory until the local processor asked for it. If it could figure out
> how to write it to main memory that would be better. But it could be
> that this loop nest is rewritten to first send a fetch request of all
> the neighbor's b values needed on this processor, execute the internal
> loops, wait on the fetch request, and then do the edge elements. 
> 
> Different circuitry is needed to handle global reductions (sum, max,
> etc), fetch requests where the processor isn't known until run-time,
> support for indirect addressing (a[i] = k*b[x[i]]), and lots of other
> scenarios. 
> 
> There are conferences on reconfigurable computing, so I assume people are
> working on this sort of thing. There's even a machine, the SRC-6, but I
> haven't heard from them in a while, so I assume it's not so simple. But
> I'd like to know what the problems are.
> 
> The hardware sounds reasonable: lots of FPGAs around a CPU and memory,
> with some in between. The compiler that takes Fortran and decides what
> communication should be put in hardware doesn't sound too bad. That
> communication has to be translated to Verilog. If the scope of the
> problem is limited to reading and writing memory, sending and receiving
> packets, adding internal cache-like objects, and synchronization, would
> this be difficult?
> 
> Finally comes the rest of the compilation to FPGA. Can this all be
> easily handled by current tools? It sounds like people still get
> involved with floor planning (I don't know about routing).
> 
> Hope this helps.
> 
> Matt

Article: 51669
Subject: Re: Booting Spartan IIE from SPI
From: "Falk Brunner" <Falk.Brunner@gmx.de>
Date: Sat, 18 Jan 2003 14:03:02 +0100
Links: << >>  << T >>  << A >>
"Michael Wilspang" <michael@wilspang.dk> schrieb im Newsbeitrag
news:3e285e46$0$71718$edfadb0f@dread11.news.tele.dk...
> Hi Falk and Patrick
>
> Falk; have you tried this interface solution ?

Yes. It works. What's the problem? Just shift the data bits (byte by byte)
onto Din, raising CCLK for each bit, and do this for all config bytes. Add
another dummy byte for the startup sequence. Finished. Yes, I2C MIGHT not
work (haven't tried it, and I'm not so familiar with I2C), since the
protocol is different from SPI, but SPI DOES work. It's just a simple 8-bit
parallel/serial conversion.
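In other words, the SPI master is just doing the byte-to-bit shifting for you. A hedged sketch in C of that shift loop, with the Din/CCLK pin writes replaced by a capture buffer so it can run anywhere; `set_din`/`pulse_cclk` would be board-specific in a real driver, and the MSB-first bit order is an assumption to check against the configuration guide for your part:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Capture buffer standing in for the FPGA's Din/CCLK pins: each CCLK
 * pulse latches the current Din level, as slave-serial mode does. */
static uint8_t captured[1024];
static size_t  ncaptured;
static int     din_level;

static void set_din(int level) { din_level = level; }
static void pulse_cclk(void)   { captured[ncaptured++] = (uint8_t)din_level; }

/* Shift one configuration byte out MSB first, one CCLK per bit --
 * which is exactly what an SPI master does with its MOSI line. */
static void send_config_byte(uint8_t b) {
    for (int bit = 7; bit >= 0; bit--) {
        set_din((b >> bit) & 1);
        pulse_cclk();
    }
}

static void send_bitstream(const uint8_t *data, size_t len) {
    for (size_t i = 0; i < len; i++)
        send_config_byte(data[i]);
    send_config_byte(0xFF);   /* dummy byte for the startup sequence */
}
```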

--
MfG
Falk





Article: 51670
Subject: Re: Schematic design approach compared to VHDL entry approach
From: "Austin Franklin" <austin@da98rkroom.com>
Date: Sat, 18 Jan 2003 10:25:03 -0500
Links: << >>  << T >>  << A >>
> > It's not arbitrary, it's entirely deterministic.  That's the POINT!
>
> No, the point was that I should be able to take any arbitrary C code
> of my choosing, and use that book to determine what code the compiler
> will generate.

Er, Eric, that IS what deterministic means.  Same input always generates
same output.

> This seems analogous to your claims that a logic synthesis
> tool manual should let you figure out what your Verilog code will
> compile to.

Same thing.

> I am assuming (perhaps foolishly) that this book you are claiming to
> exist isn't just a printed listing of the C compiler source code, or
> object code.  Technically such a listing printed as a book would, with
> enough study, reveal the information you want, but it would in practice
> be useful to no one but the compiler developers and maintainers.

It wasn't hardbound.  It was a typical "manual", just like you get with most
C compilers, like the "User's Guide".  If I remember right, it was
three-ring punched, and was simply printed-out pages in a three-ring
notebook.  It was intentionally written as a "manual/guide"; it wasn't just
a printout of some arbitrary code.  It did contain code examples, and
associated assembly output...of course.  Specifically, for the 68k.  It was
for an embedded OS, either OS9 or VRTX?  Whatever the company was that would
give you a new VW if you found a bug in their OS...  We NEEDED it and used
it frequently; instead of playing games with how to write our code, we could
reference this and see what the code would give us.  This saved us a lot of
time trying to argue with the tools to get them to make good code.

I'd go as far as to say that this was a very good thing back then, but
today, C compilers are probably a LOT more efficient...and processors
faster, and memory cheaper, than back then (early 80's), so I don't believe
it would be nearly as useful as it was back then.

> > Will you pay me by the hour to find it for you, then scan it in, page by
> > page....so you can see it?
>
> Yes.  If the book actually can be used for the purpose you claim, I will
> pay you by the hour to find it and provide a reasonable citation whereby
> other people (such as myself) can look it up.  No scanning should be
> necessary.
>
> Unless, of course, the book isn't something publicly available, in which
> case it doesn't serve as a good example of what you are claiming logic
> synthesis tool manuals should provide.

I doubt it is "publicly available" except perhaps on eBay ;-), as the
company is probably long out of business, and it was only given to people
who bought THAT compiler, as it would obviously be different for every
compiler.  I will look for it when I move into my new office.  If I find it,
I will let you know.

BTW, how do you think the writers of compilers decide what the output should
be, and then check it?  I mean, someone has to write some kind of spec for
how the compiler should work.  This should apply to synthesis as well.

Austin



Article: 51671
Subject: Re: Schematic design approach compared to VHDL entry approach
From: "Austin Franklin" <austin@da98rkroom.com>
Date: Sat, 18 Jan 2003 10:28:03 -0500
Links: << >>  << T >>  << A >>

"Ray Andraka" <ray@andraka.com> wrote in message
news:3E28A634.67ACE821@andraka.com...
> Austin,
>
> I think just a reference would be sufficient.  You did say it is on your
> bookshelf, I'm sure you have at least an idea where on the bookshelf it
> is.  Couldn't you just get us the title, author(s), publisher, date (and
> on the outside chance it has one, the ISBN), and if you are feeling
> generous the pertinent page numbers?

Ray,

I don't believe it was "titled" in the sense you mean, nor was it
"authored".  It was like the manual you get with Synplify...it wasn't a
published book.  As it is SPECIFIC to every compiler (and probably every
major revision), there would be no reason for it to be a generally published
book, of course.

BTW, when I move into my new office, (and you are more than welcome to come
up and help ;-), I'll keep it in mind, and see if I stumble across it.

Austin



Article: 51672
Subject: Re: Student development board
From: Kevin Brace <kev0inbrac1eusen2et@ho3tmail.c4om>
Date: Sat, 18 Jan 2003 09:59:55 -0600
Links: << >>  << T >>  << A >>
I also agree that prototyping with the XC4000 series is not recommended for
new designs, but the backend tool for the XC4000 series has now become free.

http://www.xilinx.com/ise_classics/index.htm


But ISE Classics doesn't come with a synthesis tool, so without it, it's
pretty much useless.


Kevin Brace (If someone wants to respond to what I wrote, I prefer if
you will do so within the newsgroup.)



Will wrote:
> 
> 
> Well, I do not recommend prototyping with the XC4000 series as they are
> old parts. The other thing is that you need Foundation 2.i to make
> bitstreams for the XC4000, and Foundation is not free. If you are
> still curious about the wiring, then drop me a note. Actually it's pretty
> simple: just wire all the VCC pins, all the GND pins, the JTAG pins,
> provide a clock, then you are in business.

Article: 51673
Subject: Multi Project DIE
From: "Jerry" <nospam@nowhere.com>
Date: Sat, 18 Jan 2003 12:33:35 -0500
Links: << >>  << T >>  << A >>
Well, after some time spent searching for multi-project wafers and getting
some quotes, I have come to the conclusion that MPW is great for proof of
concept and prototyping, but it is not economically feasible for production.
Since our die is around 8 mm on a side, we would have to buy four sites @
5 mm on a side. This drives the cost to around $400 a die. Cheaper than
doing it in an FPGA, but it does drive up the cost of the product.
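The site arithmetic above generalizes: an 8 mm die on a 5 mm site grid needs ceil(8/5) = 2 sites per edge, i.e. 4 sites. A quick sketch of that calculation in C; the per-site dollar figure below is just back-solved from the quoted ~$400/die, not a real quote:

```c
/* Sites needed along one edge: ceil(die_mm / site_mm), in integer math. */
static int sites_per_edge(int die_mm, int site_mm) {
    return (die_mm + site_mm - 1) / site_mm;
}

/* Total sites: the die occupies a square block of sites. */
static int sites_needed(int die_mm, int site_mm) {
    int e = sites_per_edge(die_mm, site_mm);
    return e * e;
}

/* Effective die cost given a (hypothetical) per-site price. */
static int die_cost(int die_mm, int site_mm, int cost_per_site) {
    return sites_needed(die_mm, site_mm) * cost_per_site;
}
```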

During the search I came across the term multi-project die (MPD). From the
info I have been able to find, it appears that the concept is the same for
MPD as for MPW.

I have two companies that appear to offer MPD,
http://www.ime.org.sg/circuit/cir_alpha.htm and
http://www.cmc.ca/about/program/fabrication.html

It appears CMC is for Canadian schools and companies. I have a request for
info to IME.

It can't be that my company is alone in this dilemma: not enough volume to
get the attention of the IC fab houses, plus an absolute necessity of
requiring an ASIC to perform the system functions. We can prototype in
FPGAs, but the cost of FPGAs in production is very high, and the power
consumption of an FPGA compared to an ASIC makes it impractical in our
power-constrained application. Yes, we have looked at a hard-wired FPGA.
Still expensive and power hungry.

Has anybody first hand knowledge of doing a MPD? Does anybody want (need) to
do a MPD? At this point we are flexible in schedule and packaging
requirements.

Some issues I have considered:
    1. Commercial IP. Maybe we could share the IP and reduce the insertion
cost. The legal people may have a field day with this.
    2. What happens if one of the parties drops out? That tends to put the
rest of the parties back in the situation they were trying to avoid.
    3. Pinout, mostly in the pull-ups and pull-downs. I would think power
and ground would be easy to agree on. Some of the more exotic IO standards
could be a stumbling block.
    4. Scheduling tape-out and fab.
    5. How to ratio the unit cost. Should everyone pay the same price even
though party A's circuit is one tenth the size of everyone else's?
    6. Who integrates the designs?
    7. Clock rate may come into play due to placement constraints and PLL
design.

Advantages: similar to the MPW as far as mask cost goes. Ability to get an
ASIC with low volumes at prices that may work.

Maybe this would work, maybe not. I, and I hope a lot of other people, would
be interested in further discussions and implementation.

As .09 micron comes on line this year and .064 micron is investigated, I
think the situation is only going to get worse for products that require
ASICs but whose constraints (cost and power) preclude the use of an FPGA.

regards
Jerry








Article: 51674
Subject: Re: Schematic design approach compared to VHDL entry approach
From: Spam Hater <spam_hater_7@email.com>
Date: Sat, 18 Jan 2003 18:05:06 GMT
Links: << >>  << T >>  << A >>

A fair request.  For example, you need to know that:

CSWR_n <= WR_n when (CS='1') else '1';
-and-
CSWR_n <= not(not(WR_n) and CS);

will produce different logic (and one will glitch, one will not).

The 'big' synthesizers give you tools to find out exactly what they're
doing.  The little ones do not.

And all of them treat this information as 'trade secrets', so you get
to find out what they're doing by yourself.

The good news:  After a while, this won't bug you so much anymore.
With experience, you'll be able to "see" the logic.
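Note that the two assignments above are functionally identical: when CS='1' the output is WR_n, otherwise '1' in both cases. The difference SH7 points out is purely structural (gate arrangement and hazard behavior), which a truth table cannot show. A quick brute-force equivalence check in C, modeling each form as written:

```c
/* Mux form: CSWR_n <= WR_n when (CS='1') else '1'; */
static int mux_form(int wr_n, int cs)  { return cs ? wr_n : 1; }

/* Gate form: CSWR_n <= not(not(WR_n) and CS); */
static int gate_form(int wr_n, int cs) { return !((!wr_n) && cs); }
```

Both functions agree on all four input combinations; the glitch is a dynamic effect of the synthesized gate structure, invisible at this level, which is precisely why knowing what the synthesizer emits matters.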

SH7

On Thu, 16 Jan 2003 00:04:39 -0500, "Austin Franklin"
<austin@da98rkroom.com> wrote:
>
>I don't want to run the tools and play guessing games with what the tool is
>going to output.  It should be documented, so when I code, I KNOW what the
>hardware will be.
>
>Austin
>



