
Messages from 107050

Article: 107050
Subject: Re: Xilinx ML501 availability
From: Tommy Thorn <foobar@nowhere.void>
Date: Wed, 23 Aug 2006 21:43:04 -0700
Ed McGettigan wrote:
> We have hundreds of these going through production builds right now and
> they will be available for sale in September.

Thanks Ed.

Can you say if these are "Cinderella boards", that is, will the design 
software magically stop working after some period, or will the Virtex-5 
XC5VLX50 be supported by ISE WebPACK before that happens?

Thanks,
Tommy

Article: 107051
Subject: Style of coding complex logic (particularly state machines)
From: "Eli Bendersky" <eliben@gmail.com>
Date: 23 Aug 2006 22:28:46 -0700
Hello all,

In a recent thread (where the O.P. looked for an HDL "Code Complete"
substitute) an interesting discussion arose regarding the style of
coding state machines. Unfortunately, the discussion was mostly
academic, without many real examples, so I think there's room to open
another discussion on this style, this time with real examples
displaying the various coding styles. I have also cross-posted this to
c.l.vhdl since my examples are in VHDL.

I have written quite a lot of VHDL (both for synthesis and for simulation
testbenches) in the past few years, and have adopted a fairly consistent
coding style (so consistent, in fact, that I use Perl scripts to generate
some of my code :-). My own style for writing complex logic, and state
machines in particular, is to use separate clocked processes, like the
following:


type my_state_type is
(
  wait_st,  -- "wait" is a VHDL reserved word, so the state is named wait_st
  act,
  test
);

signal my_state  : my_state_type;
signal my_output : std_logic;

...
...

my_state_proc: process(clk, reset_n)
begin
  if (reset_n = '0') then
    my_state <= wait_st;
  elsif rising_edge(clk) then
    case my_state is
      when wait_st =>
        if (some_input = some_value) then
          my_state <= act;
        end if;
        ...
        ...
      when act =>
        ...
      when test =>
        ...
      when others =>
        my_state <= wait_st;
    end case;
  end if;
end process;

my_output_proc: process(clk, reset_n)
begin
  if (reset_n = '0') then
    my_output <= '0';
  elsif rising_edge(clk) then
    if (my_state = act and some_input = some_other_val) then
      ...
    else
      ...
    end if;
  end if;
end process;


Now, people were referring mainly to two styles. One uses variables in a
single big process, with the help of procedures (the style Mike
Tressler always points to in c.l.vhdl); the other uses two
processes, one of them combinatorial.

It would be nice if the proponents of the other styles presented their
ideas with regard to state machine design, so we can discuss the
merits of the approaches based on real code and examples.

Thanks
Eli


Article: 107052
Subject: Re: uclinux on spartan-3e starter kit
From: "Antti Lukats" <antti@openchip.org>
Date: Thu, 24 Aug 2006 07:49:40 +0200
"John Williams" <john.williams@petalogix.com> wrote in message 
news:newscache$q14h4j$xmi$1@lbox.itee.uq.edu.au...
> Hi David,
>
> David wrote:
>
>> i am looking for any step by step guide to use uclinux on the starter
>> kit.
>
> A Linux-ready reference design for the Spartan3E500 starter kit is 
> available now
> from PetaLogix:
>
> http://www.petalogix.com/news_events/Spartan3E500-ref-design
>
> Regards,
>
> John
> -- 
> Dr John Williams
> www.PetaLogix.com
> (p) +61 7 33652185  (f) +61 7 33654999
>
> PetaLogix is a trading name of UniQuest Pty Ltd

Thanks John,

the 1600 thing was really useless, as you seem to be the only one having
those S3E-1600 boards, despite the fact that the Xilinx website said
'coming in June'.

Antti 



Article: 107053
Subject: Re: Style of coding complex logic (particularly state machines)
From: backhus <nix@nirgends.xyz>
Date: Thu, 24 Aug 2006 08:04:02 +0200
Hi Eli,
discussions about style are never really satisfying. You find them in this 
newsgroup again and again, but in the end most people stick to the style 
they know best. Style is more a personal question than a technical one.

Just to give you an example:
The two-process FSM you gave as an example always creates the registered 
outputs one clock after the state changes. That would drive me crazy 
when checking the simulation.

Why are you using if (elsif?) in the second process? If you have an 
enumerated state type, you could use a case there as well. It would look 
much nicer in the source, too.
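To make that concrete, here is the second process from Eli's example rewritten with a case, as suggested above (a sketch only; the output values are made up, and the state "wait" is spelled wait_st here because "wait" is a VHDL reserved word):

```vhdl
my_output_proc: process(clk, reset_n)
begin
  if reset_n = '0' then
    my_output <= '0';
  elsif rising_edge(clk) then
    case my_state is              -- same enumerated type as the state process
      when act =>
        if some_input = some_other_val then
          my_output <= '1';
        else
          my_output <= '0';
        end if;
      when others =>              -- all remaining states share a default
        my_output <= '0';
    end case;
  end if;
end process;
```

The case form also makes it obvious at a glance which states can affect the output.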

Now... Will you change your style to overcome these "flaws", or are you 
still satisfied with it because you are used to it?

Either is OK. :-)

Anyway, each style has its pros and cons, and it always depends on what 
you want to do.
-- does the synthesis result have to be very fast or very small?
-- do you need to speed up your simulation?
-- do you want easily readable source code? (that is also very personal; 
what one considers "readable" may just look like Greek to someone else)
-- etc. etc.

So there will be no common consensus.

Best regards
   Eilert



Eli Bendersky wrote:
> Hello all,
> 
> In a recent thread (where the O.P. looked for an HDL "Code Complete"
> substitute) an interesting discussion arose regarding the style of
> coding state machines.
> [rest of the original post and code example snipped]

Article: 107054
Subject: Checking syntax
From: "GaLaKtIkUs™" <taileb.mehdi@gmail.com>
Date: 23 Aug 2006 23:13:24 -0700
Hi all!
Is there a way to check the syntax of an HDL file using Xilinx command
line tools?

Thanks in advance


Article: 107055
Subject: Re: Open source Xilinx JTAG Programmer released on sourceforge.net
From: Andreas Ehliar <ehliar@lysator.liu.se>
Date: Thu, 24 Aug 2006 06:14:31 +0000 (UTC)
On 2006-08-23, zcsizmadia@gmail.com <zcsizmadia@gmail.com> wrote:
> This version supports only the Digilent USB programmer cable!
> Unfortunately I don't have the money to buy a Spartan-3E Starter Kit and
> reverse engineer the USB protocol.

No need to do that, the S3E starter kit is supported by Bryan's
modified XC3Sprog, available at http://inisyn.org/src/xup/ .

/Andreas

Article: 107056
Subject: Re: esoteric hardware?
From: backhus <nix@nirgends.xyz>
Date: Thu, 24 Aug 2006 08:31:55 +0200
Frank Buss wrote:
> hypermodest wrote:
> 
>> By the way, does anybody here have enough sense of humor to build
>> esoteric hardware using FPGA?
> 
> I don't know what you mean, maybe some useless hardware projects for fun?
> What about building a Theremin:
> 
> http://en.wikipedia.org/wiki/Theremin
> http://www.youtube.com/watch?v=eNl8qq-f1F0
> 
> I think with a FPGA it should be possible to produce much more interesting
> effects.
> 
> Or plug-in a MIDI keyboard and implement a synthesizer, which simulates
> instruments physically correct. Output could be simply PWM (a FPGA is fast
> enough for a high output resolution) with an RC filter.
> 
> http://www.fpga4fun.com/PWM_DAC.html
> 
Hi Frank,
I also wonder what is meant by "esoteric". The things you propose are 
complicated, but (in my understanding) not really weird. (Still kind of 
fun to do, if you have the time.)

The examples Hypermodest gave are from the field of programming 
languages, not applications. That's the main difference between your 
examples and his request.

For programming languages there exists a wide range of well-known, easily 
understandable variants (Fortran, Pascal, C, Java...), and then there 
are things like brainfuck & co., which are weird due to their limited 
syntax but are still Turing-complete. And that's considerably weird.

So what exists (or does not exist and is still to be discovered) for FPGAs 
that can be considered weird?
One thing that comes to my mind is asynchronous design. It always leads to 
a lot of discussions, especially when it is to be applied to FPGAs, 
because these are not designed to support such stuff. But what if 
someone created a piece of asynchronous logic (at least some kind of 
FSM) that could be proved(!) to run under all allowed conditions for that 
kind of chip?

Something else would be doing analog stuff with FPGAs. For instance, I 
remember a thing that was done on some old computers. The programmers 
wrote a nonsense program that caused the computer to send some classical 
music over a radio frequency, which could be heard with an average 
receiver in the room. The problem today: low energy and very short 
connection length.

But today we are much more advanced... every good lab today has an infrared 
camera... or not? :-)
How about controlled heat generation inside the chip that creates 
pictures on the chip case, detectable by an IR camera?

Best regards
   Eilert

Article: 107057
Subject: Re: CPU design
From: Frank Buss <fb@frank-buss.de>
Date: Thu, 24 Aug 2006 08:44:44 +0200
jacko wrote:

> AHDL for a two register NOP, INC, DEC, WRITE unit
> 
> http://indi.joox.net link to quartus II files, BIREGU.bdf
> 
> good for interruptable stack pointers

This looks like a net list or something like this. I have only ISE WebPack
installed and I don't know how to display it. Do you have a picture of it?

-- 
Frank Buss, fb@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de

Article: 107058
Subject: Re: esoteric hardware?
From: Frank Buss <fb@frank-buss.de>
Date: Thu, 24 Aug 2006 08:59:38 +0200
backhus wrote:

> One thing that comes to my mind is asynchronous design. Always leads to 
> a lot of discussions. Especially when it shall be applied to FPGAs, 
> because these are not designed to support such stuff. But what if 
> someone creates a piece of asynchronous logic (at least some kind of 
> FSM) that can be prooved(!) to run under all allowed conditions for that 
>   kind of chip.

Another idea would be to implement John von Neumann's Universal
Constructor:

http://www.sq3.org.uk/Evolution/JvN/

The web site says it needs weeks to compute the replication, but with an
FPGA it should be done in seconds.

The resolution of the cellular automaton in the program was 640x1640, so I
think you would need many FPGAs for true parallel processing, but it would
be interesting and esoteric, like the brainfuck programming language :-)

-- 
Frank Buss, fb@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de

Article: 107059
Subject: Re: fastest FPGA
From: Kolja Sulimma <news@sulimma.de>
Date: Thu, 24 Aug 2006 09:17:11 +0200
hypermodest wrote:

> I cannot do any cryptanalysis yet... And I need to test about 2^48
> values. In other words, I need a counter, a block taking the counter
> value at its input and generating a hashed value at its output, a
> comparator to test if the result is correct, and something to stop all
> this stuff and begin to yell :-)
So to get that job done in a week you need to evaluate about 500 million
hashes a second?

These are 48-bit hashes? What is your hash function?

MD5 runs at almost 1 Gbps in about 1000 LUTs.
You need something like 25 instances of that.
Easy.

Kolja Sulimma

Article: 107060
Subject: Re: DCM vs. PLL
From: Martin Thompson <martin.j.thompson@trw.com>
Date: 24 Aug 2006 10:24:28 +0100
"Antti" <Antti.Lukats@xilant.com> writes:

> the OP design looks very much like CameraLink.
> 
> so the incoming clock would be multiplied by x7 to get the bit clock.
> 
> it is doable with Virtex and a DCM, but what I would suggest
> (if it is CameraLink) is still to use the dedicated deserializer.
> 

We have a Virtex-II design that does Camera link reception.

We wanted the same board to do input and output (not at the same time -
over the same connector), and the board space for both directions of
serialisers as well as the discrete bits for the Ser_TFG and Ser_TC
and CCx signals was too large.

So we put a 2V40 down and have two bitstreams.

It works well, even in a car - we use it to extract lane markings and
provide feedback to the driver about unintentional lane-changes.

After all that, the Tx part never quite got around to being used, but
we have the code for it...

Cheers,
Martin

-- 
martin.j.thompson@trw.com 
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.conekt.net/electronic-hardware.html

   

Article: 107061
Subject: Re: esoteric hardware?
From: Andreas Ehliar <ehliar@lysator.liu.se>
Date: Thu, 24 Aug 2006 09:33:51 +0000 (UTC)
On 2006-08-24, Frank Buss <fb@frank-buss.de> wrote:
> The resoultion for the cellular automaton in the program was 640x1640, so I
> think you need many FPGAs for true parallel processing, but would be
> interesting and esoteric like the brainfuck programming language :-)

You don't even need an FPGA for a brainfuck processor; some students here
did a brainfuck processor using CPLDs. (Well, actually they said it was an
Ook! processor, but the two languages map one-to-one onto each other.)

Since they were only allowed to use CPLDs and plain 74xx logic in this
course, suffice it to say that it is beneficial to design as simple a
processor as possible :) Most people design more advanced processors,
though. The advantage of the Ook! processor was that some readily available
programs had already been written for it :)

/Andreas

Article: 107062
Subject: Block RAM vs Flip Flop
From: "Sandro" <sdroamt@netscape.net>
Date: 24 Aug 2006 02:41:34 -0700
In the following (use a fixed-width font to display it correctly)

                   +-------------/--------+
                   |            32        |
                   |     ___              |
                   +----|   \             |
                        |    |      __    |
           __            > + |-----|  |   |
data_in---|  |          |    |     |  |---+---data_out
          |  |-----/----|___/   +-->__|
     clk-->__|    32     ADD    |   XX
           FD                   |
                               clk

when XX = 32 FD:
Timing Summary:
---------------
Speed Grade: -4
   Minimum period: 5.938ns (Maximum Frequency: 168.407MHz)
   Minimum input arrival time before clock: 1.825ns
   Maximum output required time after clock: 7.241ns
   Maximum combinational path delay: No path found

when XX = RAMB16_S36 with  WE => '1' ,SSR => '0' :
Timing Summary:
---------------
Speed Grade: -4
   Minimum period: No path found
   Minimum input arrival time before clock: 1.825ns
   Maximum output required time after clock: No path found
   Maximum combinational path delay: No path found


Does "Minimum period: No path found" here suggest that it is a good
idea not to use only the RAM block as the sequential element, in
spite of its clock signal?

If yes... is it good to use a RAM block followed by 32 transparent
latches, or is it better to use a RAM block followed by 32 flip-flops?

Regards
Sandro


Article: 107063
Subject: Re: Block RAM vs Flip Flop
From: Falk Brunner <Falk.Brunner@gmx.de>
Date: Thu, 24 Aug 2006 12:21:09 +0200
Sandro wrote:
> In the following (use a font with fixed width to display it rightly)
> 
>                    +-------------/--------+
>                    |            32        |
>                    |     ___              |
>                    +----|   \             |
>                         |    |      __    |
>            __            > + |-----|  |   |
> data_in---|  |          |    |     |  |---+---data_out
>           |  |-----/----|___/   +-->__|
>      clk-->__|    32     ADD    |   XX
>            FD                   |
>                                clk
> 
> when XX = 32 FD:
> Timing Summary:
> ---------------
> Speed Grade: -4
>    Minimum period: 5.938ns (Maximum Frequency: 168.407MHz)
>    Minimum input arrival time before clock: 1.825ns
>    Maximum output required time after clock: 7.241ns
>    Maximum combinational path delay: No path found
> 
> when XX = RAMB16_S36 with  WE => '1' ,SSR => '0' :
> Timing Summary:
> ---------------
> Speed Grade: -4
>    Minimum period: No path found
>    Minimum input arrival time before clock: 1.825ns
>    Maximum output required time after clock: No path found
>    Maximum combinational path delay: No path found
> 
> 
> Does "Minimum period: No path found" here suggest that it is a good
> idea not to use only the RAM block as the sequential element, in
> spite of its clock signal?

I don't understand this sentence.

> If yes... is it good to use a RAM block followed by 32 transparent
> latches, or is it better to use a RAM block followed by 32 flip-flops?

BRAMs are slower than flip-flops. But in your case I guess the tool 
"sees" that the address bus is static and does some optimizations (removing 
the BRAMs?); that's why no minimum period is found.
Usually, to speed up BRAM operation, inputs and outputs are registered by 
fabric flip-flops.
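A minimal sketch of that registering (the signal names here are illustrative, not taken from Sandro's design):

```vhdl
-- Re-register the BRAM output in a fabric flip-flop so the adder starts
-- from a short FF-to-FF path instead of the BRAM's long clock-to-out delay.
signal bram_dout, bram_dout_reg, acc : unsigned(31 downto 0);
...
process(clk)
begin
  if rising_edge(clk) then
    bram_dout_reg <= bram_dout;           -- fabric FF after the BRAM output
    acc           <= acc + bram_dout_reg; -- adder now fed from the fabric FF
  end if;
end process;
```

This costs one cycle of latency but breaks the BRAM clock-to-out delay off the adder path.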

Regards
Falk

Article: 107064
Subject: Re: JOP as SOPC component
From: "KJ" <kkjennings@sbcglobal.net>
Date: Thu, 24 Aug 2006 10:36:45 GMT

"Tommy Thorn" <tommy.thorn@gmail.com> wrote in message 
news:1156359725.967272.219130@74g2000cwt.googlegroups.com...
>A quick answer for this one:
>
>> rdy_cnt=1 sounds like it is allowing JOP on SimpCon to start up the
>> next transaction (read/write or twiddle thumbs) one clock cycle before
>> the read data is actually available.  But how is that different than
>> the Avalon slave dropping wait request one clock cycle before the data
>> is available and then asserting read data valid once the data actually
>> is available?
>
> The signal waitrequest has nothing to do with the output, but is
> property of the input. What you're suggesting is an "abuse" of Avalon
> and would only work for slaves that support only one outstanding
> transfer with a latency of exactly one.  Clearly incompatible with
> existing Avalon components.
>
Not at all an abuse of Avalon.  In fact, it is the way waitrequest is 
intended to be used.  I'm not quite sure what you're referring to by input 
and output when you say "nothing to do with the output, but is property of 
the input", but what waitrequest is all about is signalling the end of the 
'address phase' of the transaction, where the 'address phase' is the clock 
cycle(s) where read and/or write are asserted along with address and 
writedata (if write is asserted).

Waitrequest is an output from a slave component that, when asserted, signals 
the Avalon fabric that the address and command inputs (and writedata if 
performing a write) need to be held for another clock cycle.  Once the 
slave component no longer needs the address and command inputs, it can drop 
waitrequest even if it has not actually completed the transaction.

The Avalon fabric 'almost' passes the waitrequest signal right back to the 
master device, the only change being that the Avalon logic basically gates 
the slave's waitrequest output with the slave's chipselect input (which the 
Avalon fabric creates) to form the master's waitrequest input (assuming a 
simple single master/slave connection for simplicity here).  Per Avalon, 
when an Avalon master sees its waitrequest input asserted it simply must 
not change the state of the address, read, write or writedata outputs on 
that particular clock cycle.  When the Avalon master is performing a read or 
write and it sees waitrequest not asserted it is free to start up another 
transaction on the next clock cycle.  In particular, if the first 
transaction was a read, this means that the 'next' transaction can be 
started even though the data has not yet been returned from the first read. 
For a slave device that has a readdatavalid output signal Avalon does not 
define any min/max time for when readdatavalid must come back just that for 
each read that has been accepted by the slave (i.e. one with read asserted, 
waitrequest not asserted) there must be exactly one cycle with readdatavalid 
asserted flagging the readdata output as having valid data.

During a read, Avalon allows the delay between the clock cycle with "read 
and not(waitrequest)" and the eventual clock cycle with "readdatavalid" to be 
either fixed or variable.  If fixed, then SOPC Builder allows the fixed 
latency number to be entered into the class.ptf file for the slave and no 
readdatavalid output from the slave is required.  All that does though is 
cause SOPC Builder to synthesize the code itself to generate readdatavalid 
as if it came from the slave code itself.  If the readdatavalid output IS 
part of the component then SOPC Builder allows the latency delay to be 
variable; whether it actually is or not is up to the slave's VHDL/Verilog 
design code.  Bottom line is that Avalon does have a mechanism built right 
into the basic specification that allows a master device to start up another 
read or write cycle one clock cycle prior to readdata actually having been 
provided.
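To illustrate, a slave with a fixed two-cycle read latency might generate readdatavalid itself along these lines (a sketch under those assumptions, not code from the Avalon specification):

```vhdl
-- Fixed-latency read path: the slave never stalls the address phase,
-- and readdata becomes valid exactly two clocks after a read is accepted.
waitrequest <= '0';

process(clk)
begin
  if rising_edge(clk) then
    rd_accepted   <= read and chipselect;  -- a read was accepted this cycle
    readdatavalid <= rd_accepted;          -- flag readdata two cycles later
  end if;
end process;
```

The master is free to issue another read every cycle; the two pipeline flip-flops keep the accepted reads and their readdatavalid flags in order.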

Given the description that Martin posted on how his SimpCon interface logic 
works it 'appears' that he believes that this ability to start up another 
cycle prior to completion (meaning the data from the read has actually been 
returned) is what is giving SimpCon the edge over Avalon.  At least that's 
how it appears to me, which is why I asked him to walk me through the 
transaction to find where I'm missing something.  My basic confusion is not 
understanding just exactly where in the read transaction does SimpCon 'pull 
ahead' of Avalon and give 'JOP on SimpCon' the performance edge over 'JOP on 
Avalon'.

Anyway, hopefully that explains why it's not abusing Avalon in any way.

KJ 



Article: 107065
Subject: Re: Block RAM vs Flip Flop
From: "Sandro" <sdroamt@netscape.net>
Date: 24 Aug 2006 03:49:38 -0700
Falk Brunner wrote:
> ...
> I dont understand this sentence.

sorry for my ugly english...

> ...
> BRAMs are slower than flip-flops. But in your case I guess the tool
> "sees" that the address bus is static and does some optimizations (removing
> the BRAMs?); that's why no minimum period is found.
> Usually, to speed up BRAM operation, inputs and outputs are registered by
> fabric flip-flops.

XST doesn't seem to remove the BRAM... and the address bus is not
static (I did some trials with 32 flip-flops after seeing the ?problem?
with the RAM block):

Device utilization summary:
---------------------------
Selected Device : 3s200ft256-4
 Number of Slices:                      34  out of   1920     1%
 Number of 4 input LUTs:                32  out of   3840     0%
 Number of IOs:                         75
 Number of bonded IOBs:                 74  out of    173    42%
    IOB Flip Flops:                     32
 Number of BRAMs:                        1  out of     12     8%
 Number of GCLKs:                        1  out of      8    12%

bye
Sandro


Article: 107066
Subject: Re: USB PHYs and drivers that folks have used
From: "KJ" <kkjennings@sbcglobal.net>
Date: Thu, 24 Aug 2006 11:04:46 GMT
Links: << >>  << T >>  << A >>
Antti, thanks for your 2 cents... and for figuring out that I'm talking 
about an FPGA implementation in the first place.

>> Perusing OpenCores I see the USB 2.0 Function IP Core which seems like
>> it should work for the device side.  Some questions:
>
> don't count on that.
> the core is used as the basis for a commercial core, yes, but the OpenCores
> version is not able to pass compliance testing IMHO
>
Noted; any specific areas of concern?

>> - What UTMI PHYs have people used with this core and can say that they
>> work.  The parts listed in the OpenCores document are no longer
>> available, but the document is also 4 years old.
>> - Any uCLinux device drivers available to support all of this?
>> - Any other good/bad things to say about this core?
>>
>> Also on OpenCores is a USB 1.1 Host and Device IP core which would work
>> for my USB host interface.
>> - What USB transceivers have people used and can say they work.  The
>> OpenCores document lists a Fairchild USB1T11A and a Philips ISP1105,
>> both of which are still available.  Anything good/bad to say about
>> either of these parts?
>> - Any uCLinux device drivers available to support all of this?
>
> OC FS host-dev core has uClinux 2.6 drivers for NIOS-II

OK, I'll look harder, upon cursory googling I didn't run across anyone 
particularly advertising this.

>
>> - Any other recommendations for transeivers?
>>
> doesn't really matter, most of them are usable actually.
> for HS, ULPI is way better as it takes less wiring.

I agree, but haven't found any ULPI interface implementations.  Know of any?

>
>> For either interface, does anyone have any recommendations on other IP
>> cores (do not have to be 'free' cores) that have been used and tested
>> and have a uCLinux device driver available that should be considered?
>>

> for FPGA resource utilization the price for a USB HS core is just way above
> dedicated silicon, e.g. adding a HS host-device chip to the FPGA is cheaper
> than adding the PHYs and having an FPGA USB core.

In my case, though, the number of I/Os in the system is the gating factor and 
will most likely end up determining the particular FPGA solution.  With that 
assumption, the functionality that one decides to embed into the FPGA is 
based on how many system I/O pins can be directly controlled, and if not 
directly, then how cheap an external part can be hung off the FPGA to 
implement the entire function.  At present estimation, I'm not increasing 
the size of the FPGA in order to get more logic resources, so whatever can be 
crammed in that minimizes overall FPGA I/O count and minimizes external 
parts cost is the driver.

>
> whatever FPGA USB IP core you choose, you end up spending 3 to 12
> months with verification and compliance testing.

Starting from a field-proven core, reference design and driver, though, I'd 
be off to a good start.  I've done USB device work on a board before with 
commercial silicon; I just haven't yet had to deal with the USB functionality 
being in an FPGA and how stable/tested/qualified that may or may not be.

OpenCores.org and googling did not give any such assurances, which is my 
reason for querying for people who may have actually used the above 
mentioned cores, or who can suggest ones that they have used and qualified 
for use in a product.

KJ 



Article: 107067
Subject: ANN: MicroBlaze platform simulator XSIM ver 1.1 released
From: "Antti" <Antti.Lukats@xilant.com>
Date: 24 Aug 2006 04:11:38 -0700
XSIM 1.1: MicroBlaze platform simulator released:

free download (no registration required):
http://xilant.com/index.php?option=com_remository&Itemid=36&func=fileinfo&id=3

included are many ready to run MB-uClinux images,
u-boot demo and standalone application demos.
VGA graphics from standalone apps and u-boot,
uClinux microwindows tuxchess, ...

new in ver 1.1
* Pictiva OLED display support
* Partial support for MicroBlaze ver 5

fixed
* can execute petalogix uclinux demo image


Antti Lukats
http://xilant.com


Article: 107068
Subject: Re: Block RAM vs Flip Flop
From: Falk Brunner <Falk.Brunner@gmx.de>
Date: Thu, 24 Aug 2006 14:03:26 +0200
Sandro schrieb:

>>BRAMs are slower than flip-flops. But in your case I guess the tool
>>"sees" that the address bus is static and does some optimizations (removing
>>the BRAMs?); that's why no minimum period is found.
>>Usually, to speed up BRAM operation, inputs and outputs are registered by
>>fabric flip-flops.
> 
> 
> Don't seems XST remove the BRAM... and the address bus is not
> static (I done some trial with 32 flip flop  after seeing the ?problem?
> with ram block) :
> 
> Device utilization summary:
> ---------------------------
> Selected Device : 3s200ft256-4
>  Number of Slices:                      34  out of   1920     1%
>  Number of 4 input LUTs:                32  out of   3840     0%
>  Number of IOs:                         75
>  Number of bonded IOBs:                 74  out of    173    42%
>     IOB Flip Flops:                     32
>  Number of BRAMs:                        1  out of     12     8%
>  Number of GCLKs:                        1  out of      8    12%
> 
> bye
> Sandro
> 

Ok, but IOB flip-flops are not optimal for this test, since they are 
located "far" away from the BRAMs. You should disable the option for 
using IOB flip-flops. Look at the timing report; this should give you 
some hints about the missing minimum period.

Regards
Falk

Article: 107069
Subject: Re: Block RAM vs Flip Flop
From: "Sandro" <sdroamt@netscape.net>
Date: 24 Aug 2006 05:37:41 -0700

Falk Brunner wrote:
> Sandro wrote:
> > ...
> > Device utilization summary:
> > ---------------------------
> > Selected Device : 3s200ft256-4
> >  Number of Slices:                      34  out of   1920     1%
> >  Number of 4 input LUTs:                32  out of   3840     0%
> >  Number of IOs:                         75
> >  Number of bonded IOBs:                 74  out of    173    42%
> >     IOB Flip Flops:                     32
> >  Number of BRAMs:                        1  out of     12     8%
> >  Number of GCLKs:                        1  out of      8    12%
> > ...
>
> OK, but IOB FlipFlops are not optimal for this test, since they are
> located "far" away from the BRAMs. You should disable the option for
> using IOB FlipFlops. Look at the timing report; this should give you
> some hints about the missing minimum period.

1) The 32 IOB FlipFlops here are for one input to the adder... the other
input of
    the adder comes only from the BRAM.
2) Disabling "Pack I/O Registers/Latches into IOBs" is a MAP
option...
    the report above is an XST report.

Sandro


Article: 107070
Subject: Re: DCM vs. PLL
From: robnstef@frontiernet.net
Date: 24 Aug 2006 07:02:26 -0700
Links: << >>  << T >>  << A >>
Peter,

Yes, I agree, the interface is straightforward, which is why I was
prompted to post and try to get a feel for the DCM, since I don't have
much experience with it.

I do appreciate the feedback and will push a little harder on this
group to get a better answer as to why they feel the V2PRO30 can't
handle this interface.

Take care and thank you for the response,
Rob


Peter Alfke wrote:
> It looks pretty straightforward to me.
> Three incoming ~45 MHz clocks, each multiplied by 7 in its own DCM,
> triggering the input DDR registers. You have to do a little bit of
> thinking, because 7 is an odd number, but I see no problem with that.
> Of course, it would be much simpler in Virtex-4, but that seems not
> to be your choice.
> Peter Alfke, Xilinx (from home)
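
For reference, the multiply-by-7 that Peter describes would map onto the CLKFX output of a DCM, roughly as below. This is a sketch only; the port list is abbreviated and the instance/signal names are hypothetical:

```verilog
// Sketch: one DCM per ~45 MHz input clock, CLKFX = CLKIN * 7.
DCM #(
   .CLKFX_MULTIPLY(7),   // M = 7
   .CLKFX_DIVIDE(1)      // D = 1  ->  CLKFX ~= 7 x 45 MHz = ~315 MHz
) dcm_x7 (
   .CLKIN(clk45),        // ~45 MHz source-synchronous input clock
   .CLKFB(clk0_bufg),    // feedback from CLK0 through a BUFG
   .CLK0(clk0),
   .CLKFX(clk315),       // 7x clock for the DDR input registers
   .RST(rst),
   .LOCKED(locked)
);
```

Each of the three incoming clocks would get its own DCM instance like this, with the DDR input registers clocked from the corresponding CLKFX.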


Article: 107071
Subject: Re: Style of coding complex logic (particularly state machines)
From: "Eli Bendersky" <eliben@gmail.com>
Date: 24 Aug 2006 07:34:26 -0700
Links: << >>  << T >>  << A >>

backhus wrote:
> Hi Eli,
> discussion about styles is not really satisfying. You find it in this
> newsgroup again and again, but in the end most people stick to the style
> they know best. Style is more a personal question than a technical one.
>
> Just to give you an example:
> The 2-process FSM you gave as an example always creates the registered
> outputs one clock after the state changes. That would drive me crazy
> when checking the simulation.

I guess this indeed is a matter of style. It doesn't drive me crazy
mostly because I'm used to it. Except in rare cases, this single clock
cycle doesn't change anything. However, the benefit IMHO is that the
separation is cleaner, especially when a lot of signals depend on the
state.

>
> Why are you using if-(elsif?) in the second process? If you have an
> enumerated state type you could use a case there as well. Would look
> much nicer in the source, too.

I prefer to use if..else if there is only one "if". When there are
"elsif"s, case is preferable.

>
> Now... Will you change your style to overcome these "flaws" or are you
> still satisfied with it, because you are used to it?
>
> Both are OK. :-)
>
> Anyway, each style has its pros and cons and it always depends on what
> you want to do.
> -- has the synthesis result to be very fast or very small?
> -- do you need to speed up your simulation
> -- do you want easily readable source code (that also is very personal,
> what one considers "readable" may just look like greek to someone else)
> -- etc. etc.
>
> So, there will be no common consensus.
>

In my original post I had no intention to reach a common consensus. I
wanted to see practical code examples which demonstrate the various
techniques and discuss their relative merits and disadvantages.

Kind regards,
Eli



> Eli Bendersky schrieb:
> > Hello all,
> >
> > In a recent thread (where the O.P. looked for a HDL "Code Complete"
> > substitute) an interesting discussion arose regarding the style of
> > coding state machines. Unfortunately, the discussion was mostly
> > academic without many real examples, so I think there's room to open
> > another discussion on this style, this time with real examples
> > displaying the various coding styles. I have also cross-posted this to
> > c.l.vhdl since my examples are in VHDL.
> >
> > I have written quite a lot of VHDL (both for synthesis and simulation
> > TBs) in the past few years, and have adopted a fairly consistent coding
> > style (so consistent, in fact, that I use Perl scripts to generate some
> > of my code :-). My own style for writing complex logic and state
> > machines in particular is in separate clocked processes, like the
> > following:
> >
> >
> > type my_state_type is
> > (
> >   idle,
> >   act,
> >   test
> > );
> >
> > signal my_state: my_state_type;
> > signal my_output: std_logic;
> >
> > ...
> > ...
> >
> > my_state_proc: process(clk, reset_n)
> > begin
> >   if (reset_n = '0') then
> >     my_state <= idle;
> >   elsif (rising_edge(clk)) then
> >     case my_state is
> >       when idle =>
> >         if (some_input = some_value) then
> >           my_state <= act;
> >         end if;
> >         ...
> >         ...
> >       when act =>
> >         ...
> >       when test =>
> >         ...
> >       when others =>
> >         my_state <= idle;
> >     end case;
> >   end if;
> > end process;
> >
> > my_output_proc: process(clk, reset_n)
> > begin
> >   if (reset_n = '0') then
> >     my_output <= '0';
> >   elsif (rising_edge(clk)) then
> >     if (my_state = act and some_input = some_other_val) then
> >       ...
> >     else
> >       ...
> >     end if;
> >   end if;
> > end process;
> >
> >
> > Now, people were referring mainly to two styles. One is variables used
> > in a single big process, with the help of procedures (the style Mike
> > Treseler always points to in c.l.vhdl), and another style - two
> > processes, with a combinatorial process.
> >
> > It would be nice if the proponents of the other styles presented their
> > ideas with regards to the state machine design and we can discuss the
> > merits of the approaches, based on real code and examples.
> > 
> > Thanks
> > Eli
> >
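
For comparison, the single-clocked-process alternative mentioned above (state and outputs assigned in one process, so an output changes on the same clock edge as the transition that causes it) might look like this minimal sketch; the state and signal names are hypothetical:

```vhdl
-- Sketch of the one-process style: state register and registered
-- output live in the same clocked process, so my_output updates on
-- the same edge as the state transition, not one cycle later.
one_proc_fsm: process (clk, reset_n)
begin
  if reset_n = '0' then
    my_state  <= idle;
    my_output <= '0';
  elsif rising_edge(clk) then
    case my_state is
      when idle =>
        if some_input = some_value then
          my_state  <= act;
          my_output <= '1';   -- asserted together with the transition
        end if;
      when act =>
        my_state <= test;
      when test =>
        my_state  <= idle;
        my_output <= '0';
      when others =>
        my_state <= idle;
    end case;
  end if;
end process;
```

The trade-off matches the discussion above: the output timing is tighter to follow in simulation, but the process grows quickly when many outputs depend on the state.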


Article: 107072
Subject: Why No Process Shrink On Prior FPGA Devices ?
From: "tweed_deluxe" <cmusial@comcast.net>
Date: 24 Aug 2006 07:35:42 -0700
Links: << >>  << T >>  << A >>
I'm wondering what intrinsic economic, technical, or "other" barriers
have precluded FPGA device vendors from taking this step.  In other
words, why are there no advertised, periodic refreshes of older
generation FPGA devices?

In the microprocessor world, many vendors have established a long and
successful history of developing a pin-compatible product roadmap for
customers.  For the most part, these steps have allowed customers to
reap periodic technology updates without incurring the need to perform
major re-work on their printed circuit card designs or underlying
software.

On the Xilinx side of the fence there appears to be no such parallel.
Take for example, Virtex-II Pro.  This has been a proven work-horse for
many of our designs.    It takes quite a bit of time to truly
understand and harness all of the capabilities and features offered by
a platform device like this.      After making the investment to
develop IP and hardware targeted at this technology, is it unreasonable
to expect a forward-looking roadmap that incorporates modest updates to
the silicon?  A step that doesn't require a full-blown jump to a new
FPGA device family and subsequent re-work of the portfolio of hardware
and, very often, the related FPGA IP ?

Sure, devices like Virtex-5 offer capabilities that will be true
enablers for many customers (and for us at times as well).   But why
not apply a 90 or 65 nm process shrink to V2-Pro, provide modest speed
bumps to the MGT, along with minor refinements to the hardware
multipliers?  Maybe toss in a PLL for those looking to recover clocks
embedded in the MGT data stream etc.    And make the resulting devices
100% pin and code compatible with prior generations.

Perhaps I'm off in the weeds.  But, in our case, the ability to count
on continued refinement and update of a pin-compatible product like
V2-Pro would result in more orders of Xilinx silicon as opposed to
fewer.

The absence of such refreshes in the FPGA world leads me to believe
that I must be naive.  So I am trying to understand where the logic is
failing.  It's just that there are times I wish the FPGA vendors could
more closely parallel what the folks in the DSP and micro-processor
world do ...


Article: 107073
Subject: high level languages for synthesis
From: Sanka Piyaratna <jayasanka.piyaratna@gmail.com>
Date: Fri, 25 Aug 2006 00:07:34 +0930
Links: << >>  << T >>  << A >>
Hi,

What is your opinion on high-level languages such as SystemC, Handel-C,
etc. for FPGA development instead of VHDL/Verilog?

Sanka

Article: 107074
Subject: ISERDES strange simulation behaviour
From: "=?iso-8859-1?B?R2FMYUt0SWtVc5k=?=" <taileb.mehdi@gmail.com>
Date: 24 Aug 2006 07:50:46 -0700
Links: << >>  << T >>  << A >>
In the Virtex-4 user guide (ug070.pdf, p. 365, table 8-4) it is clearly
indicated that for INTERFACE_TYPE=NETWORKING and DATA_RATE=SDR the
latency should be 2 CLKDIV clock periods.
I instantiated an ISERDES with DATA_WIDTH=6, but I see that valid output
appears on the next CLKDIV rising edge.
Any explanations?

Thanks in advance!

Here is my code:
--------------CUT HERE FILE SERDES_inst.v---------------
module ISERDES_inst(CLK,CLKDIV,D,CE1,SR,REV,
		    Q1,Q2,Q3,Q4,Q5,Q6);
   // I/O Ports
   // Inputs
   input CLK,CLKDIV,D,CE1,SR,REV;
   // Outputs
   output Q1,Q2,Q3,Q4,Q5,Q6;



   // ISERDES: Source Synchronous Input Deserializer
   //          Virtex-4
   // Xilinx HDL Language Template version 8.1i
   ISERDES #(
	     .BITSLIP_ENABLE("FALSE"), // TRUE/FALSE to enable bitslip
controller
	     .DATA_RATE("SDR"), // Specify data rate of "DDR" or "SDR"
	     .DATA_WIDTH(6), // Specify data width - For DDR 4,6,8, or 10
             //    For SDR 2,3,4,5,6,7, or 8
	     .INIT_Q1(1'b0), // INIT for Q1 register - 1'b1 or 1'b0
	     .INIT_Q2(1'b0), // INIT for Q2 register - 1'b1 or 1'b0
	     .INIT_Q3(1'b0), // INIT for Q3 register - 1'b1 or 1'b0
	     .INIT_Q4(1'b0), // INIT for Q4 register - 1'b1 or 1'b0
	     .INTERFACE_TYPE("NETWORKING"), // Use model - "MEMORY" or
"NETWORKING"
	     .IOBDELAY("NONE"), // Specify outputs where delay chain will be
applied
             //    "NONE", "IBUF", "IFD", or "BOTH"
	     .IOBDELAY_TYPE("DEFAULT"), // Set tap delay "DEFAULT", "FIXED",
or "VARIABLE"
	     .IOBDELAY_VALUE(0), // Set initial tap delay to an integer from 0
to 63
	     .NUM_CE(1), // Define number of clock enables to an integer of 1
or 2
	     .SERDES_MODE("MASTER"), // Set SERDES mode to "MASTER" or "SLAVE"
	     .SRVAL_Q1(1'b0), // Define Q1 output value upon SR assertion -
1'b1 or 1'b0
	     .SRVAL_Q2(1'b0), // Define Q2 output value upon SR assertion -
1'b1 or 1'b0
	     .SRVAL_Q3(1'b0), // Define Q3 output value upon SR assertion -
1'b1 or 1'b0
	     .SRVAL_Q4(1'b0) // Define Q4 output value upon SR assertion -
1'b1 or 1'b0
	     ) my_ISERDES_inst (
				.O(),    // 1-bit combinatorial output
				.Q1(Q1),  // 1-bit registered output
				.Q2(Q2),  // 1-bit registered output
				.Q3(Q3),  // 1-bit registered output
				.Q4(Q4),  // 1-bit registered output
				.Q5(Q5),  // 1-bit registered output
				.Q6(Q6),  // 1-bit registered output
				.SHIFTOUT1(), // 1-bit carry output
				.SHIFTOUT2(), // 1-bit carry output
				.BITSLIP(),     // 1-bit Bitslip input
				.CE1(CE1),        // 1-bit clock enable input
				.CE2(),        // 1-bit clock enable input
				.CLK(CLK),        // 1-bit clock input
				.CLKDIV(CLKDIV),  // 1-bit divided clock input
				.D(D),            // 1-bit serial data input
				.DLYCE(),    // 1-bit delay chain enable input
				.DLYINC(),  // 1-bit delay increment/decrement input
				.DLYRST(),  // 1-bit delay chain reset input
				.OCLK(),      // 1-bit high-speed clock input
				.REV(REV),        // 1-bit reverse SR input
				.SHIFTIN1(), // 1-bit carry input
				.SHIFTIN2(), // 1-bit carry input
				.SR(SR)           // 1-bit set/reset input
				);
   // End of ISERDES_inst instantiation

endmodule
-------------- END CUT HERE FILE SERDES_inst.v---------------

--------------CUT HERE FILE SERDES_inst_tb.v-----------------
`timescale 1ns / 1ps
module SERDES_inst_tb;

   // Inputs
   reg CLK,CLKDIV,D,CE1,SR,REV;

   // Outputs
   wire Q1,Q2,Q3,Q4,Q5,Q6;

   // Parameters
   parameter  CLK_PERIOD=2.85;
   parameter  CLKDIV_PERIOD=CLK_PERIOD*6;

   // Variables
   integer    i=0;

   // Instantiate the Unit Under Test (UUT)
   ISERDES_inst uut (
		     .CLK(CLK),
		     .CLKDIV(CLKDIV),
		     .D(D),
		     .CE1(CE1),
		     .SR(SR),
		     .REV(REV),
		     .Q1(Q1),
		     .Q2(Q2),
		     .Q3(Q3),
		     .Q4(Q4),
		     .Q5(Q5),
		     .Q6(Q6)
		      );
   always
     #(CLK_PERIOD/2) CLK=~CLK;
   always
     #(CLKDIV_PERIOD/2) CLKDIV=~CLKDIV;


   initial begin
      // Initialize Inputs
      CLK=1;
      CLKDIV=0;
      D=0;
      CE1=0;
      SR=0;
      REV=0;

      // Wait 100 ns for global reset to finish
      #100;

      // Add stimulus here
      // Testing availability of REV and the behaviour of SR
      @(negedge CLKDIV)#(0.5)REV=1;
      //#(CLKDIV_PERIOD);
      @(negedge CLKDIV)#(2)REV=0;
      #(2*CLKDIV_PERIOD);
      SR=1;
      #(CLKDIV_PERIOD)/*$stop*/;
      // Testing the ISERDES main functionality
      CE1=1;
      SR=0;
      @(posedge CLKDIV);
      @(negedge CLK);
      for(i=1;i<30;i=i+1)begin
	 D=$random;
	 #(CLK_PERIOD);
      end
      @(posedge CLKDIV)#(6*CLKDIV_PERIOD);
      CE1=0;
      $stop;

   end

endmodule
-------------- END CUT HERE FILE SERDES_inst_tb.v--------------



