Messages from 153325

Article: 153325
Subject: Re: Design Notation VHDL or Verilog?
From: Andy <jonesandy@comcast.net>
Date: Wed, 1 Feb 2012 05:31:10 -0800 (PST)
On Jan 31, 2:00 pm, Petter Gustad <newsmailco...@gustad.com> wrote:
> Andy <jonesa...@comcast.net> writes:
> > there is negligible advantage to using VHDL over verilog. It is at
> > higher levels of abstraction, where design productivity is maximized,
> > that VHDL shines.
>
> It depends of the Verilog version in question and the type of design.
> SystemVerilog has a much higher level of abstraction when it comes to
> verification and testbench code. Especially when using libraries like
> UVM etc.
>
> //Petter
>
> --
> .sig removed by request.

AFAIK, the synthesizable subset of SV is pretty much plain old
verilog, with all its warts.

For verification, SV is indeed really nice. However, a new open source
VHDL Verification Methodology (VVM) library of VHDL packages has
emerged that provides VHDL with some of the capabilities of SV with
OVM/UVM.

Andy

Article: 153326
Subject: regarding tft controller
From: "vlsi330" <gnanadeepika423@n_o_s_p_a_m.gmail.com>
Date: Wed, 01 Feb 2012 09:12:51 -0600
hi,


can you please tell me how to add the xps_tft controller IP core in Xilinx EDK
for a Spartan-3E FPGA board platform?





	   
					
---------------------------------------		
Posted through http://www.FPGARelated.com

Article: 153327
Subject: Difference between Xilinx isim and modelsim
From: "guenter" <guenter.wolpert@n_o_s_p_a_m.orsys.de>
Date: Wed, 01 Feb 2012 09:13:00 -0600
Is it allowed to pass an element of a std_logic_vector to the rising_edge
function?
When doing this, isim doesn't detect all changes, while modelsim does.
The code below toggles bits of a 3-bit vector. Bit 1 of the vector is
checked for rising and falling edges by directly passing vec(1) to
rising_edge(). Bit 1 is also assigned to a scalar signal which is also
checked for edges:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity top is
end top;

architecture Behavioral of top is
  signal vec : std_logic_vector(2 downto 0) := "000";
  signal sig : std_logic                    := '0';
begin
  gen: process is
  begin
    vec <= "000" after 1 ns,
           "010" after 2 ns,
           "000" after 3 ns,
           "101" after 4 ns,
           "111" after 5 ns,
           "101" after 6 ns;
    wait for           6 ns;
    wait;
  end process gen;

  sig <= vec(1);

  watch_bit1a: process is
  begin
    wait until rising_edge(vec(1));
    report "rising edge on vec" severity note;
  end process watch_bit1a;

  watch_bit1b: process is
  begin
    wait until falling_edge(vec(1));
    report "falling edge on vec" severity note;
  end process watch_bit1b;

  watch_bit1c: process is
  begin
    wait until rising_edge(sig);
    report "rising edge on sig" severity note;
  end process watch_bit1c;

  watch_bit1d: process is
  begin
    wait until falling_edge(sig);
    report "falling edge on sig" severity note;
  end process watch_bit1d;
  
end Behavioral;

Simulating with modelsim reports all edges:
# ** Note: rising edge on vec Time: 2 ns  Iteration: 0  Instance: /top
# ** Note: rising edge on sig Time: 2 ns  Iteration: 1  Instance: /top
# ** Note: falling edge on vec Time: 3 ns  Iteration: 0  Instance: /top
# ** Note: falling edge on sig Time: 3 ns  Iteration: 1  Instance: /top
# ** Note: rising edge on vec Time: 5 ns  Iteration: 0  Instance: /top
# ** Note: rising edge on sig Time: 5 ns  Iteration: 1  Instance: /top
# ** Note: falling edge on vec Time: 6 ns  Iteration: 0  Instance: /top
# ** Note: falling edge on sig Time: 6 ns  Iteration: 1  Instance: /top

isim only reports those edges that make bit 1 different from the remaining
bits:

at 2 ns: Note: rising edge on vec (/top/).
at 2 ns(1): Note: rising edge on sig (/top/).
at 3 ns(1): Note: falling edge on sig (/top/).
at 5 ns(1): Note: rising edge on sig (/top/).
at 6 ns: Note: falling edge on vec (/top/).
at 6 ns(1): Note: falling edge on sig (/top/).

I can imagine that passing vec(1) to rising_edge is not really allowed
since it is of type std_logic_vector(1 downto 1) instead of std_logic.
What do you think about this? Is isim more accurate with respect to the standard,
or is it simply a bug?

regards
Guenter

	   
					
---------------------------------------		
Posted through http://www.FPGARelated.com

Article: 153328
Subject: Re: Post-synthesis simulation
From: "guenter" <guenter.wolpert@n_o_s_p_a_m.orsys.de>
Date: Wed, 01 Feb 2012 09:13:15 -0600
>Hello,
>
>I want to run a post-synthesis simulation. I don't find where to
>choose the sources (Netlist post-synthesis) to launch the needed
>simulation from ISE 13.3.
>
>Does someone know how to do it ? I need some help.
>
>Simulator: ModelSim SE-64 10.0d
>Xilinx tools: 13.3
>
>Molka
>PhD student
>

From within ISE 13.3's Project Navigator, with a VHDL project open:
- Select the "Design" tab.
- In the Hierarchy pane (upper area), select the top-level unit of your
design.
- In the Processes pane (lower area), expand "Synthesize - XST" and
double-click "Generate Post Synthesis Simulation Model".
This generates a folder within your project named netgen/synthesis, and
within this folder there is a file named <top>_synthesis, where <top> is
the name of your project's top-level VHDL file.
This file must be compiled with ModelSim, together with your testbench.
I'm not sure what is necessary to launch ModelSim directly from ISE,
because I always use separate scripts for ModelSim.
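
Roughly, such a script boils down to something like this (a sketch only; the
testbench name is a placeholder, and the Xilinx simulation libraries
unisim/simprim must already be compiled and mapped for ModelSim, e.g. with
compxlib):

vlib work
vcom netgen/synthesis/<top>_synthesis.vhd
vcom <your_testbench>.vhd
vsim work.<your_testbench>
run -all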

regards
Guenter


	   
					
---------------------------------------		
Posted through http://www.FPGARelated.com

Article: 153329
Subject: Re: Design Notation VHDL or Verilog?
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Wed, 1 Feb 2012 15:20:49 +0000 (UTC)
rickman <gnuarm@gmail.com> wrote:

(snip, I wrote)
>> If it is useful in ASIC/FPGA designs, wouldn't it also be even more
>> useful in software running on commonly (or not so commonly) available
>> processors? Yet none have bothered to implement it.

>> It could be done in software, but none of the popular languages
>> have any support for fixed point non-integer arithmetic.

>> It was one of the fun things I did in PL/I so many years ago, though.

> I don't really follow your reasoning here.  The idea of the fixed
> point package is to allow a user to work with fixed point values in
> VHDL without having to manually deal with all the details.  You seem
> to be saying that fixed point arithmetic is of no value because it is
> not implemented in software.  I don't see how software has anything to
> do with it.  

The suggestion, which could be wrong, was that if scaled fixed point
is useful in an HDL, it should also be useful in non-HDL applications.

The designers of PL/I believed that it would be, but designers of
other languages don't seem to have believed in it enough to include it.
Personally, I liked PL/I 40 years ago, and believe that it should
have done better than it did, and scaled fixed point was a feature
that I liked to use.
 
> Software is implemented on programmable hardware and for
> the most part is designed for the most common platforms and does not
> do a good job of utilizing unusual hardware effectively.  I don't see
> how fixed point arithmetic would be of any value on conventional
> integer oriented hardware.

Well, the designers of conventional hardware might argue that they
support scaled fixed point as long as you keep track of the radix
point yourself. It is, then, up to software to make it easier for
programmers by helping them keep track of the radix point.

I believe that in addition to PL/I that there are some other less
commonly used languages, maybe ADA, that support it.

It can be done in languages like Pascal and C; TeX and Metafont do it.
Multiplication is complicated, though, because most HLLs don't supply
the high half of the product in multiply, or allow a double length
dividend in division. Knuth wrote the Pascal routines for TeX and MF
to do it, and suggests that implementations rewrite them in assembler
for higher performance.

>> >> Yes you can do fixed point with different positions of the radix
>> >> point, PL/I and a small number of other languages do, but on hardware
>> >> that doesn't keep track of the radix point.
>> > You seem to be thinking only of CPU designs.  HDL is used for a lot
>> > more than CPU designs.

>> Well, more than CPU designs, I work on systolic arrays, but even
>> there so, it doesn't seem likely to help much. Will it generate
>> pipelined arithmetic units? Of all the things that I have to think
>> about in logic design, the position of the binary point is one of
>> the smallest.

> Asking about pipelining with fixed point numbers is like asking if
> boxes help your car.  Sure, if you want to carry oranges in a car you
> can use boxes, but you don't have to.  If your systolic arrays use
> fixed point arithmetic then the fixed point package will help your
> systolic arrays.  If you don't care about the precision of your
> calculations then don't bother using the fixed point package.

In the past, the tools would not synthesize division with a
non-constant divisor. If you generate the division logic yourself,
you can add in any pipelining needed. (I did one once, and not for
a systolic array.) With the hardware block multipliers in many FPGAs,
it may not be necessary to generate pipelined multipliers, but many
will want pipelined divide.

(snip)
>> And I don't expect the processor to help me figure out which
>> ones those are.

> What processor?  You have lost me here.  A multiply provides a result
> with as many bits as the sum of the bits in the inputs.  But this many
> bits are seldom needed.  The fixed point package makes it a little
> easier to work with the variety of formats that typically are used in
> a case like this.  No, the package does not figure out what you want
> to do.  You have to know that, but it does help hide some of the
> details.

Yes, multiply generates a wide product, and most processors with
a multiply instruction supply all the bits. But most HLLs don't
provide a way to get those bits. For scaled fixed point, you 
shift as appropriate to get the needed product bits out.

>> >> > Similarly, VHDL also has a floating point package that handles
>> >> > arbitrary sizes of data and exponent.

>> >> And that is synthesizable by anyone?

>> > I haven't looked at the floating point package, but if it is described
>> > in terms of basic operators in VHDL it should be synthesizable in
>> > every tool.  That is the point of overloading.  You can define a
>> > multiply operator for real data in terms of the basic building blocks
>> > and this is now a part of the language.  VHDL is extensible that way.

>> Again, does it synthesize pipelined arithmetic units? If not, then
>> they aren't much use to anything I would work on. If I do it in
>> an FPGA that is because normal processors aren't fast enough.

>> But the big problem with floating point is that it takes too much logic.
>> The barrel shifter for pre/post normalization for an adder is huge.
>> (The actual addition, not so huge.)

> Nope, the fixed point package does not design pipelines, it isn't a
> pipeline package.  Yup, the barrel shifter is as large as a
> multiplier.  So what is your point?  The purpose of the package is to
> allow you to design floating point arithmetic without designing all
> the details of the logic.  It isn't going to reduce the size of the
> logic.

The point I didn't make very well was that floating point is still
not very usable in FPGA designs, as it is still too big. If a 
design can be pipelined, then it has a chance to be fast enough
to be useful.

Historically, FPGA based hardware to accelerate scientific
programming has not done very well in the marketplace. People
keep trying, though, and I would like to see someone succeed.

-- glen


Article: 153330
Subject: Re: Design Notation VHDL or Verilog?
From: gtwrek@sonic.net (Mark Curry)
Date: Wed, 1 Feb 2012 17:38:04 +0000 (UTC)
In article <5a603ea6-876d-4f20-a35f-79dcfe6ebd1e@h3g2000yqe.googlegroups.com>,
Andy  <jonesandy@comcast.net> wrote:
>On Jan 31, 2:00 pm, Petter Gustad <newsmailco...@gustad.com> wrote:
>> Andy <jonesa...@comcast.net> writes:
>> > there is negligible advantage to using VHDL over verilog. It is at
>> > higher levels of abstraction, where design productivity is maximized,
>> > that VHDL shines.
>>
>> It depends of the Verilog version in question and the type of design.
>> SystemVerilog has a much higher level of abstraction when it comes to
>> verification and testbench code. Especially when using libraries like
>> UVM etc.
>>
>> //Petter
>>
>> --
>> .sig removed by request.
>
>AFAIK, the synthesizable subset of SV is pretty much plane old
>verilog, with all its warts.

Nope.  SV synthesis is quite powerful.  Arrays and structs are first class
citizens in SV and can be members of ports.  Further, most tools handle
interfaces now pretty well in synthesis.  There's quite a benefit to using
these in your RTL.

Trying to resist making this a language war, but I see little advantage
one way or the other with regard to a "higher level of abstraction".  You can
probably achieve the same with either language.

--Mark



Article: 153331
Subject: Re: Difference between Xilinx isim and modelsim
From: Alan Fitch <apf@invalid.invalid>
Date: Wed, 01 Feb 2012 23:08:29 +0000
On 01/02/12 15:13, guenter wrote:
> Is it allowed to pass a member of a std_logic_vector to the rising_edge
> function?
> When doing this, isim doen's detect all changes, while modelsim does.
> The code below toggles bits of a 3-bit vector. Bit 1 of the vector is
> checked for rising and falling edges by directly passing vec(1) to
> reising_edge(). Bit 1 is also assigned to a scalar signal which is also
> checked for edges:
> 
<snipped code>

> Simulating with modelsim reports all edges:
> # ** Note: rising edge on vec Time: 2 ns  Iteration: 0  Instance: /top
> # ** Note: rising edge on sig Time: 2 ns  Iteration: 1  Instance: /top
> # ** Note: falling edge on vec Time: 3 ns  Iteration: 0  Instance: /top
> # ** Note: falling edge on sig Time: 3 ns  Iteration: 1  Instance: /top
> # ** Note: rising edge on vec Time: 5 ns  Iteration: 0  Instance: /top
> # ** Note: rising edge on sig Time: 5 ns  Iteration: 1  Instance: /top
> # ** Note: falling edge on vec Time: 6 ns  Iteration: 0  Instance: /top
> # ** Note: falling edge on sig Time: 6 ns  Iteration: 1  Instance: /top
> 
> isim only reports those edges that make bit 1 different from the remaining
> bits:
> 
> at 2 ns: Note: rising edge on vec (/top/).
> at 2 ns(1): Note: rising edge on sig (/top/).
> at 3 ns(1): Note: falling edge on sig (/top/).
> at 5 ns(1): Note: rising edge on sig (/top/).
> at 6 ns: Note: falling edge on vec (/top/).
> at 6 ns(1): Note: falling edge on sig (/top/).
>

I would say that Modelsim is correct.

> I can imagine that passing vec(1) to rising_edge is not really allowed
> since it is of type std_logic_vector(1 downto 1) instead of std_logic.

That's not correct. An element of a std_logic_vector is just std_logic.
vec(1) is std_logic. vec(1 downto 1) is a 1 element wide std_logic_vector.

> What do you think about this? Is isim more accurate with the standard or is
> is simply a bug?

I believe it is a bug in Isim.

I'm basing this on reading section 8.1 of the VHDL 2002 standard "wait
statement" where it says there is an implicit sensitivity list derived from

- A function call, apply this rule to every actual designator in every
parameter association

and

— An indexed name whose prefix denotes a signal, add the longest static
prefix of the name to the
sensitivity set and apply this rule to all expressions in the indexed name

As you're using vec(1), the longest static prefix is behavioural.vec(1),
so vec(1) should trigger the wait until whenever it changes.

regards
Alan


-- 
Alan Fitch

Article: 153332
Subject: Re: Design Notation VHDL or Verilog?
From: Kim Enkovaara <kim.enkovaara@iki.fi>
Date: Thu, 02 Feb 2012 09:03:55 +0200
On 1.2.2012 17:20, glen herrmannsfeldt wrote:

> The suggestion, which could be wrong, was that if scaled fixed point
> is useful in an HDL, it should also be useful in non-HDL applications.

It is very useful in some applications, for example in DSP processing.
There are DSP processors that support fixed point arithmetic. Also, many
"fixed" FPGA-based DSP algorithms use fixed point numbers of different
resolutions all around. A standard package to help with that kind of
design is very welcome; previously such packages have been proprietary.

> The point I didn't make very well was that floating point is still
> not very usable in FPGA designs, as it is still too big. If a
> design can be pipelined, then it has a chance to be fast enough
> to be useful.

Floating point might be too heavy in many applications, but on the
other hand, if the number range and operators are limited to what is
needed, it is not that big anymore compared to leading-edge FPGA sizes.
For example, a leading-edge FPGA can support ~2000 single precision
floating point multipliers.

> Historically, FPGA based hardware to accelerate scientific
> programming has not done very well in the marketplace. People
> keep trying, though, and I would like to see someone succeed.

You are focused on quite a narrow band of FPGA applications. FPGAs
are almost everywhere; for example, they are very popular in mobile
network base stations. Telecom equipment also uses some forms of
floating/fixed point arithmetic, for example in policing and shaping.

--Kim

Article: 153333
Subject: Re: Design Notation VHDL or Verilog?
From: rickman <gnuarm@gmail.com>
Date: Wed, 1 Feb 2012 23:59:23 -0800 (PST)
On Feb 1, 10:20 am, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> rickman <gnu...@gmail.com> wrote:
>
> (snip, I wrote)
>
> >> If it is useful in ASIC/FPGA designs, wouldn't it also be even more
> >> useful in software running on commonly (or not so commonly) available
> >> processors? Yet none have bothered to implement it.
> >> It could be done in software, but none of the popular languages
> >> have any support for fixed point non-integer arithmetic.
> >> It was one of the fun things I did in PL/I so many years ago, though.
> > I don't really follow your reasoning here.  The idea of the fixed
> > point package is to allow a user to work with fixed point values in
> > VHDL without having to manually deal with all the details.  You seem
> > to be saying that fixed point arithmetic is of no value because it is
> > not implemented in software.  I don't see how software has anything to
> > do with it.
>
> The suggestion, which could be wrong, was that if scaled fixed point
> is useful in an HDL, it should also be useful in non-HDL applications.
>
> The designers of PL/I believed that it would be, but designers of
> other languages don't seem to have believed in it enough to include.
> Personally, I liked PL/I 40 years ago, and believe that it should
> have done better than it did, and scaled fixed point was a feature
> that I liked to use.

I have no idea what bearing a forty year old computer language has to
do with this issue.


> > Software is implemented on programmable hardware and for
> > the most part is designed for the most common platforms and does not
> > do a good job of utilizing unusual hardware effectively.  I don't see
> > how fixed point arithmetic would be of any value on conventional
> > integer oriented hardware.
>
> Well, the designers of conventional hardware might argue that they
> support scaled fixed point as long as you keep track of the radix
> point yourself. It is, then, up to software to make it easier for
> programmers by helping them keep track of the radix point.
>
> I believe that in addition to PL/I that there are some other less
> commonly used languages, maybe ADA, that support it.
>
> It can be done in languages like Pascal and C, TeX and Metafont do it.
> Multiplication is complicated, though, because most HLL's don't supply
> the high half of the product in multiply, or allow a double length
> dividend in division. Knuth wrote the Pascal routines for TeX and MF
> to do it, and suggests that implementations rewrite them in assembler
> for higher performance.

Again, I don't know why you are dragging high level languages into
this.  What HLLs do has nothing to do with the utility of features of
HDLs.


> >> >> Yes you can do fixed point with different positions of the radix
> >> >> point, PL/I and a small number of other languages do, but on hardware
> >> >> that doesn't keep track of the radix point.
> >> > You seem to be thinking only of CPU designs.  HDL is used for a lot
> >> > more than CPU designs.
> >> Well, more than CPU designs, I work on systolic arrays, but even
> >> there so, it doesn't seem likely to help much. Will it generate
> >> pipelined arithmetic units? Of all the things that I have to think
> >> about in logic design, the position of the binary point is one of
> >> the smallest.
> > Asking about pipelining with fixed point numbers is like asking if
> > boxes help your car.  Sure, if you want to carry oranges in a car you
> > can use boxes, but you don't have to.  If your systolic arrays use
> > fixed point arithmetic then the fixed point package will help your
> > systolic arrays.  If you don't care about the precision of your
> > calculations then don't bother using the fixed point package.
>
> In the past, the tools would not synthesize division with a
> non-constant divisor. If you generate the division logic yourself,
> you can add in any pipelining needed. (I did one once, and not for
> a systolic array.) With the hardware block multipliers in many FPGAs,
> it may not be necessary to generate pipelined multipliers, but many
> will want pipelined divide.

Oh, I see why you are talking about pipelining now.  I don't think the
package provides pipelining.  But that does not eliminate the utility
of the package.  It may not be useful to you if it isn't pipelined,
but many other apps can use non-pipelined arithmetic just fine.


> >> And I don't expect the processor to help me figure out which
> >> ones those are.
> > What processor?  You have lost me here.  A multiply provides a result
> > with as many bits as the sum of the bits in the inputs.  But this many
> > bits are seldom needed.  The fixed point package makes it a little
> > easier to work with the variety of formats that typically are used in
> > a case like this.  No, the package does not figure out what you want
> > to do.  You have to know that, but it does help hide some of the
> > details.
>
> Yes, multiply generates a wide product, and most processors with
> a multiply instruction supply all the bits. But most HLLs don't
> provide a way to get those bits. For scaled fixed point, you
> shift as appropriate to get the needed product bits out.

I have no interest in what HLLs do.  I use HDLs to design hardware and
I determine how the HDL shifts or rounds or whatever it is that I
need.


> >> >> > Similarly, VHDL also has a floating point package that handles
> >> >> > arbitrary sizes of data and exponent.
> >> >> And that is synthesizable by anyone?
> >> > I haven't looked at the floating point package, but if it is described
> >> > in terms of basic operators in VHDL it should be synthesizable in
> >> > every tool.  That is the point of overloading.  You can define a
> >> > multiply operator for real data in terms of the basic building blocks
> >> > and this is now a part of the language.  VHDL is extensible that way.
> >> Again, does it synthesize pipelined arithmetic units? If not, then
> >> they aren't much use to anything I would work on. If I do it in
> >> an FPGA that is because normal processors aren't fast enough.
> >> But the big problem with floating point is that it takes too much logic.
> >> The barrel shifter for pre/post normalization for an adder is huge.
> >> (The actual addition, not so huge.)
> > Nope, the fixed point package does not design pipelines, it isn't a
> > pipeline package.  Yup, the barrel shifter is as large as a
> > multiplier.  So what is your point?  The purpose of the package is to
> > allow you to design floating point arithmetic without designing all
> > the details of the logic.  It isn't going to reduce the size of the
> > logic.
>
> The point I didn't make very well was that floating point is still
> not very usable in FPGA designs, as it is still too big. If a
> design can be pipelined, then it has a chance to be fast enough
> to be useful.

Whether floating point is too big depends on your app.  Pipelining
does not reduce the size of a design and can only be used in apps that
can tolerate the pipeline latency.  You speak as if your needs are the
same as everyone else's.


> Historically, FPGA based hardware to accelerate scientific
> programming has not done very well in the marketplace. People
> keep trying, though, and I would like to see someone succeed.
>
> -- glen

Scientific programming is not the only app for FPGAs and fixed or
floating point arithmetic.  Fixed point arithmetic is widely used in
signal processing apps and floating point is often used when fixed
point is too limiting.

Rick

Article: 153334
Subject: Re: Design Notation VHDL or Verilog?
From: Andy <jonesandy@comcast.net>
Date: Thu, 2 Feb 2012 07:14:45 -0800 (PST)
On Feb 1, 11:38 am, gtw...@sonic.net (Mark Curry) wrote:
> Nope.  SV synthesis is quite powerful.  Arrays, and structs are first class
> citizens in SV and can be members of ports.  Further most tools handle
> interfaces now pretty well in synthesis.  There's quite a benefit to using
> these in your RTL.
>
> Trying to resist making this a language war, but I see little advantage
> one or the other with regard to a "higher level of abstraction".  You can
> probably achieve the same with either language.
>
> --Mark

Mark,

Sounds like SV synthesis has improved quite a bit, which is good to
know, and good for the industry. Which tools support these features (I
assume at least Synplify)? As soon as someone standardizes a library
to do fixed point in SV, then it will be able to do what VHDL does :)
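
To be concrete about what I mean by "what VHDL does": the operators in the
fixed point package size their results to keep full precision, and any
truncation, rounding or saturation has to be spelled out. A minimal sketch
(assuming a VHDL-2008 tool with ieee.fixed_pkg; older tools ship the same
package as ieee_proposed, and the names below are just for illustration):

library ieee;
use ieee.fixed_pkg.all;

entity fixed_demo is
  port (
    a : in  sfixed(3 downto -12);   -- 4 integer bits (incl. sign), 12 fraction bits
    b : in  sfixed(1 downto -14);
    y : out sfixed(3 downto -12)
  );
end entity fixed_demo;

architecture rtl of fixed_demo is
  -- the a*b result range follows from the operand ranges: (3+1+1 downto -12-14)
  signal prod : sfixed(5 downto -26);
begin
  prod <= a * b;                                   -- full-precision product
  y <= resize(prod, y'high, y'low,
              overflow_style => fixed_saturate,    -- clip instead of wrapping
              round_style    => fixed_round);      -- round instead of truncating
end architecture rtl;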

I'm not that familiar with SV or SV interfaces in particular. There's
one thing I wish VHDL would do, and that is allow an interface (a
record port with different directionality associated with each element
of the record) on a component/entity/subprogram.

From the abstraction point of view, I think you are right: SV and VHDL
synthesis are pretty close, with specific features giving each an edge
over the other.

However, I (and I believe the OP) was not considering SV as just a
version of verilog. Perhaps that is something that is beginning
to change (the realization that there is another viable choice besides
vanilla verilog and VHDL for synthesis).

Andy




Article: 153335
Subject: Re: Difference between Xilinx isim and modelsim
From: Andy <jonesandy@comcast.net>
Date: Thu, 2 Feb 2012 08:47:06 -0800 (PST)
On Feb 1, 5:08 pm, Alan Fitch <a...@invalid.invalid> wrote:
> On 01/02/12 15:13, guenter wrote:
> > Is it allowed to pass a member of a std_logic_vector to the rising_edge
> > function?
> > When doing this, isim doen's detect all changes, while modelsim does.
> > The code below toggles bits of a 3-bit vector. Bit 1 of the vector is
> > checked for rising and falling edges by directly passing vec(1) to
> > reising_edge(). Bit 1 is also assigned to a scalar signal which is also
> > checked for edges:
>
> <snipped code>
>
>
>
>
>
> > Simulating with modelsim reports all edges:
> > # ** Note: rising edge on vec Time: 2 ns  Iteration: 0  Instance: /top
> > # ** Note: rising edge on sig Time: 2 ns  Iteration: 1  Instance: /top
> > # ** Note: falling edge on vec Time: 3 ns  Iteration: 0  Instance: /top
> > # ** Note: falling edge on sig Time: 3 ns  Iteration: 1  Instance: /top
> > # ** Note: rising edge on vec Time: 5 ns  Iteration: 0  Instance: /top
> > # ** Note: rising edge on sig Time: 5 ns  Iteration: 1  Instance: /top
> > # ** Note: falling edge on vec Time: 6 ns  Iteration: 0  Instance: /top
> > # ** Note: falling edge on sig Time: 6 ns  Iteration: 1  Instance: /top
>
> > isim only reports those edges that make bit 1 different from the remaining
> > bits:
>
> > at 2 ns: Note: rising edge on vec (/top/).
> > at 2 ns(1): Note: rising edge on sig (/top/).
> > at 3 ns(1): Note: falling edge on sig (/top/).
> > at 5 ns(1): Note: rising edge on sig (/top/).
> > at 6 ns: Note: falling edge on vec (/top/).
> > at 6 ns(1): Note: falling edge on sig (/top/).
>
> I would say that Modelsim is correct.
>
> > I can imagine that passing vec(1) to rising_edge is not really allowed
> > since it is of type std_logic_vector(1 downto 1) instead of std_logic.
>
> That's not correct. An element of a std_logic_vector is just std_logic.
> vec(1) is std_logic. vec(1 downto 1) is a 1 element wide std_logic_vector.
>
> > What do you think about this? Is isim more accurate with the standard or is
> > is simply a bug?
>
> I believe it is a bug in Isim.
>
> I'm basing this on reading section 8.1 of the VHDL 2002 standard "wait
> statement" where it says there is an implicit sensitivity list derived from
>
> - A function call, apply this rule to every actual designator in every
> parameter association
>
> and
>
> — An indexed name whose prefix denotes a signal, add the longest static
> prefix of the name to the
> sensitivity set and apply this rule to all expressions in the indexed name
>
> As you're using vec(1), the longest static prefix is behavioural.vec(1),
> so vec(1) should trigger the wait until whenever it changes.

I believe that is not correct. The LSP is ...vec (the whole vector).
So processes that appear to be sensitive to one bit of a vector are
also sensitive to other bits in the vector, but the
rising_edge(vec(1)) function invoked by the wait statement is only
looking at vec(1).

Nevertheless, it looks like isim has a bug.

Andy


Article: 153336
Subject: Re: Design Notation VHDL or Verilog?
From: gtwrek@sonic.net (Mark Curry)
Date: Thu, 2 Feb 2012 17:17:17 +0000 (UTC)
In article <315bd1eb-26b4-4164-b503-9e2146ea7499@l14g2000vbe.googlegroups.com>,
Andy  <jonesandy@comcast.net> wrote:
>On Feb 1, 11:38 am, gtw...@sonic.net (Mark Curry) wrote:
>> Nope.  SV synthesis is quite powerful.  Arrays, and structs are first class
>> citizens in SV and can be members of ports.  Further most tools handle
>> interfaces now pretty well in synthesis.  There's quite a benefit to using
>> these in your RTL.
>>
>> Trying to resist making this a language war, but I see little advantage
>> one or the other with regard to a "higher level of abstraction".  You can
>> probably achieve the same with either language.
>>
>> --Mark
>
>Mark,
>
>Sounds like SV synthesis has improved quite a bit, which is good to
>know, and good for the industry. Which tools support these features (I
>assume at least Synplify)? As soon as someone standardizes a library
>to do fixed point in SV, then it will be able to do what VHDL does :)

I've used Mentor Precision (for FPGAs), and am currently eval'ing Synplify.
Synopsys has supported SV for a while now for ASICs.

I've followed this thread with some confusion as to what this
"fixed point library" is and/or does.  I do all kinds of DSP math
apps in verilog, with variable precision.  Yes, you need
to pay attention to scaling along the data path.  I'm unsure how
a library can really help this.

Although I tend to want to be very precise - drawing out
my data path with all truncation/rounding/etc specifically
called out.  I suppose if some library somehow standardized
this... Hmm, need to think about it.  Can't see how it'd offer
much benefit.

--Mark
 



Article: 153337
Subject: Re: Design Notation VHDL or Verilog?
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Thu, 2 Feb 2012 18:36:38 +0000 (UTC)
rickman <gnuarm@gmail.com> wrote:

(snip, I wrote)
>> The suggestion, which could be wrong, was that if scaled fixed point
>> is useful in an HDL, it should also be useful in non-HDL applications.

>> The designers of PL/I believed that it would be, but designers of
>> other languages don't seem to have believed in it enough to include.
>> Personally, I liked PL/I 40 years ago, and believe that it should
>> have done better than it did, and scaled fixed point was a feature
>> that I liked to use.

> I have no idea what bearing a forty year old computer language has to
> do with this issue.

As far as I know, currently if something can be done either with
an FPGA or a small microcontroller, or even a DSP processor,
the choice is often for the processor. With the price of smaller
FPGAs coming down, that might change, but there are also more
people who know how to program such processors than know HDL design.

Given that, I would expect some overlap between FPGA/HDL
applications and processor/software applications, depending
on the speed required at the time. Some applications might 
initially be developed in software, debugged and tested, before
moving to an FPGA/HDL implementation. 

Maybe the question isn't whether scaled fixed point is useful
in HDL, but why it isn't useful enough that people would ask for
it in an HLL, in software. Why is there no scaled fixed point datatype
in C or Java?

(snip)
>> > Software is implemented on programmable hardware and for
>> > the most part is designed for the most common platforms and does not
>> > do a good job of utilizing unusual hardware effectively.  I don't see
>> > how fixed point arithmetic would be of any value on conventional
>> > integer oriented hardware.

>> Well, the designers of conventional hardware might argue that they
>> support scaled fixed point as long as you keep track of the radix
>> point yourself. It is, then, up to software to make it easier for
>> programmers by helping them keep track of the radix point.

>> I believe that in addition to PL/I that there are some other less
>> commonly used languages, maybe ADA, that support it.

(snip)
> Again, I don't know why you are dragging high level languages into
> this.  What HLLs do has nothing to do with the utility of features of
> HDLs.

As I said above, if for no other reason, then to test and debug
the applications that will later be implemented in scaled fixed
point VHDL. 

Well, there are two applications that D. Knuth believes should
always be done in fixed point: finance and typesetting. As we know,
it is also often used in DSP, but Knuth likely didn't (doesn't)
do much DSP work. (It was some years ago he said that.)

(snip)
>> In the past, the tools would not synthesize division with a
>> non-constant divisor. If you generate the division logic yourself,
>> you can add in any pipelining needed. (I did one once, and not for
>> a systolic array.) With the hardware block multipliers in many FPGAs,
>> it may not be necessary to generate pipelined multipliers, but many
>> will want pipelined divide.

> Oh, I see why you are talking about pipelining now.  I don't think the
> package provides pipelining.  But that does not eliminate the utility
> of the package.  It may not be useful to you if it isn't pipelined,
> but many other apps can use non-pipelined arithmetic just fine.

Maybe so. In the past, the cost and size kept people away from 
using FPGAs when conventional processors would do. With the lower
prices of smaller (but not so small) FPGAs that might change.

(snip)
>> Yes, multiply generates a wide product, and most processors with
>> a multiply instruction supply all the bits. But most HLLs don't
>> provide a way to get those bits. For scaled fixed point, you
>> shift as appropriate to get the needed product bits out.

> I have no interest in what HLLs do.  I use HDLs to design hardware and
> I determine how the HDL shifts or rounds or whatever it is that I
> need.

Yes. If I determine the shifts and rounds, then I don't need VHDL
to do it for me. I can do it using integer arithmetic in C,
(easier if I can get all the product bits from multiply), and
in HDL.

(snip)
>> The point I didn't make very well was that floating point is still
>> not very usable in FPGA designs, as it is still too big. If a
>> design can be pipelined, then it has a chance to be fast enough
>> to be useful.

> Whether floating point is too big depends on your app.  Pipelining
> does not reduce the size of a design and can only be used in apps that
> can tolerate the pipeline latency.  You speak as if your needs are the
> same as everyone else's.

Well, the desire to run almost any computational problem faster
certainly isn't unique to me. But yes, I know the problems that
I work on better than those that I don't. But if you can't process
data faster than a medium sized conventional processor, there
won't be much demand, unless the cost (including amortized
development) is less.

>> Historically, FPGA based hardware to accelerate scientific
>> programming has not done very well in the marketplace. People
>> keep trying, though, and I would like to see someone succeed.

> Scientific programming is not the only app for FPGAs and fixed or
> floating point arithmetic.  Fixed point arithmetic is widely used in
> signal processing apps and floating point is often used when fixed
> point is too limiting.

True, but for floating point number crunching, in the teraflop
or petaflop scale, it is scientific programming. With the size
and speed of floating point in an FPGA, one could instead
use really wide fixed point. Given the choice between 32 bit
floating point and 64 bit or more fixed point for DSP applications,
which one is more useful?

In general, fixed point is a better choice for quantities
with an absolute (not size dependent) uncertainty, floating
point for quantities with a relative uncertainty.

-- glen

Article: 153338
Subject: Virtex6HXT PCIe doesn't come up to Gen2 on Sandy Bridge systems
From: General Schvantzkoph <schvantzkoph@yahoo.com>
Date: 2 Feb 2012 18:53:56 GMT
I have an 8X PCIe core in a Virtex6HXT (version 2.5, the latest in 13.4). 
It's configured for Gen2 but it's coming up Gen1. lspci -vvv reports that 
both the core and the board are Gen2 capable. I've looked at the PIPE 
interface in Chipscope and the board is advertising Gen1 speeds only. 
Xilinx support said that there is an issue with Sandy Bridge chipsets and 
that something had to be done in the driver, however they didn't know the 
specifics. Has anyone gotten V6 to run at 5GT on a Sandy Bridge? What did 
you have to do to make it work?

lspci -vvv

01:00.0 RAM memory: Xilinx Corporation Device 6028
	Subsystem: Xilinx Corporation Device 0007
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 4 bytes
	Interrupt: pin A routed to IRQ 12
	Region 0: Memory at e0000000 (64-bit, non-prefetchable) [size=256M]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1+,D2+,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [48] MSI: Enable- Count=1/1 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [60] Express (v2) Endpoint, MSI 01
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <64ns, L1 unlimited
			ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
		DevCtl:	Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
			RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 5GT/s, Width x8, ASPM L0s, Latency L0 unlimited, L1 unlimited
			ClockPM- Surprise- LLActRep- BwNot-
		LnkCtl:	ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 2.5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range B, TimeoutDis-
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-
		LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-, Selectable De-emphasis: -6dB
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -3.5dB
	Capabilities: [100] Device Serial Number 00-00-00-00-00-00-00-00

Article: 153339
Subject: Re: Difference between Xilinx isim and modelsim
From: Alan Fitch <apf@invalid.invalid>
Date: Fri, 03 Feb 2012 00:16:56 +0000
On 02/02/12 16:47, Andy wrote:
> On Feb 1, 5:08 pm, Alan Fitch <a...@invalid.invalid> wrote:
<snip>
>>
>> I believe it is a bug in Isim.
>>
>> I'm basing this on reading section 8.1 of the VHDL 2002 standard "wait
>> statement" where it says there is an implicit sensitivity list derived from
>>
>> - A function call, apply this rule to every actual designator in every
>> parameter association
>>
>> and
>>
>> — An indexed name whose prefix denotes a signal, add the longest static
>> prefix of the name to the
>> sensitivity set and apply this rule to all expressions in the indexed name
>>
>> As you're using vec(1), the longest static prefix is behavioural.vec(1),
>> so vec(1) should trigger the wait until whenever it changes.
> 
> I believe that is not correct. The LSP is ...vec (the whole vector).
> So processes that appear to be sensitive to one bit of a vector are
> also sensitive to other bits in the vector, but the
> rising_edge(vec(1)) function invoked by the wait statement is only
> looking at vec(1).
>

Hi Andy,
I was (perhaps foolishly) assuming that because 1 is a constant, the
slice could be statically determined, but you could well be correct; I
really need to go away and read the definition of longest static prefix
again.

> Nevertheless, it looks like isim has a bug.
> 

I agree,

regards

Alan

-- 
Alan Fitch

Article: 153340
Subject: Re: Difference between Xilinx isim and modelsim
From: Alan Fitch <apf@invalid.invalid>
Date: Fri, 03 Feb 2012 00:48:23 +0000
On 03/02/12 00:16, Alan Fitch wrote:
> On 02/02/12 16:47, Andy wrote:
>> On Feb 1, 5:08 pm, Alan Fitch <a...@invalid.invalid> wrote:
> <snip>
>>>
>>> I believe it is a bug in Isim.
>>>
>>> I'm basing this on reading section 8.1 of the VHDL 2002 standard "wait
>>> statement" where it says there is an implicit sensitivity list derived from
>>>
>>> - A function call, apply this rule to every actual designator in every
>>> parameter association
>>>
>>> and
>>>
>>> — An indexed name whose prefix denotes a signal, add the longest static
>>> prefix of the name to the
>>> sensitivity set and apply this rule to all expressions in the indexed name
>>>
>>> As you're using vec(1), the longest static prefix is behavioural.vec(1),
>>> so vec(1) should trigger the wait until whenever it changes.
>>
>> I believe that is not correct. The LSP is ...vec (the whole vector).
>> So processes that appear to be sensitive to one bit of a vector are
>> also sensitive to other bits in the vector, but the
>> rising_edge(vec(1)) function invoked by the wait statement is only
>> looking at vec(1).
>>
> 
> Hi Andy,
>  I was (perhaps foolishly) assuming that because 1 is a constant, the
> slice could be statically determined - but you could well be correct, I
> really need to go away and read the definition of longest static prefix
> again
>

Ok, here's the text from 1076-2002

"Furthermore, a name is said to be a locally static name if and only if
one of the following conditions hold:

...

— The name is an indexed name whose prefix is a locally static name, and
every expression that appears as part of the name is a locally static
expression.

— The name is a slice name whose prefix is a locally static name and
whose discrete range is a locally static discrete range.

A static signal name is a static name that denotes a signal. The longest
static prefix of a signal name is the name itself, if the name is a
static signal name; otherwise, it is the longest prefix of the name that
is a static signal name. Similarly, a static variable name is a static
name that denotes a variable, and the longest static prefix of a
variable name is the name itself, if the name is a static variable name;
otherwise, it is the longest prefix of the name that is a static
variable name.

Examples:
S(C,2) --A static name: C is a static constant.
R(J to 16) --A nonstatic name: J is a signal.
--R is the longest static prefix of R(J to 16).
T(n) --A static name; n is a generic constant.
T(2) --A locally static name."

So I think I was right,

regards
Alan


-- 
Alan Fitch

Article: 153341
Subject: Re: Design Notation VHDL or Verilog?
From: Petter Gustad <newsmailcomp6@gustad.com>
Date: Fri, 03 Feb 2012 11:37:48 +0100
Andy <jonesandy@comcast.net> writes:

> On Jan 31, 2:00 pm, Petter Gustad <newsmailco...@gustad.com> wrote:
>> Andy <jonesa...@comcast.net> writes:
>> > there is negligible advantage to using VHDL over verilog. It is at
>> > higher levels of abstraction, where design productivity is maximized,
>> > that VHDL shines.
>>
>> It depends of the Verilog version in question and the type of design.
>> SystemVerilog has a much higher level of abstraction when it comes to
>> verification and testbench code. Especially when using libraries like
>> UVM etc.
>>
>> //Petter
>>
>> --
>> .sig removed by request.
>
> AFAIK, the synthesizable subset of SV is pretty much plane old
> verilog, with all its warts.

There are several nice features like interfaces (which are
bi-directional, unlike record bundles in VHDL). There are also enums,
always_comb, always_ff, and always_latch to tell the synthesis tool what
you're trying to do, and $left, $right, etc., similar to 'left and 'right
in VHDL.

> For verification, SV is indeed really nice. However, a new open source
> VHDL Verification Methodology (VVM) library of VHDL packages has
> emerged that provides VHDL with some of the capabilities of SV with
> OVM/UVM.

I took a look at some of the links posted here previously; it looks
nice and it's an improvement. But as far as I could tell it lacked
polymorphism and inheritance. However, I have to study VVM further.

//Petter
-- 
.sig removed by request. 

Article: 153342
Subject: Re: Design Notation VHDL or Verilog?
From: Petter Gustad <newsmailcomp6@gustad.com>
Date: Fri, 03 Feb 2012 11:45:07 +0100
Andy <jonesandy@comcast.net> writes:

> know, and good for the industry. Which tools support these features (I
> assume at least Synplify)? As soon as someone standardizes a library

Quite a few tools actually. DC, Synplify, and Quartus have pretty good
SV support. XST does not, but hopefully that will change with Rodin¹.


//Petter
¹http://www.deepchip.com/items/0494-07.html

-- 
.sig removed by request. 

Article: 153343
Subject: Re: Design Notation VHDL or Verilog?
From: nico@puntnl.niks (Nico Coesel)
Date: Fri, 03 Feb 2012 11:43:59 GMT
Petter Gustad <newsmailcomp6@gustad.com> wrote:

>Andy <jonesandy@comcast.net> writes:
>
>> On Jan 31, 2:00 pm, Petter Gustad <newsmailco...@gustad.com> wrote:
>>> Andy <jonesa...@comcast.net> writes:
>>> > there is negligible advantage to using VHDL over verilog. It is at
>>> > higher levels of abstraction, where design productivity is maximized,
>>> > that VHDL shines.
>>>
>>> It depends of the Verilog version in question and the type of design.
>>> SystemVerilog has a much higher level of abstraction when it comes to
>>> verification and testbench code. Especially when using libraries like
>>> UVM etc.
>>>
>>> //Petter
>>>
>>> --
>>> .sig removed by request.
>>
>> AFAIK, the synthesizable subset of SV is pretty much plane old
>> verilog, with all its warts.
>
>There are several nice features like interfaces (which are
>bi-directional unlike record bundles in VHDL). There's also enum's,

Record bundles can be bi-directional in VHDL!

-- 
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------

Article: 153344
Subject: Re: Design Notation VHDL or Verilog?
From: Petter Gustad <newsmailcomp6@gustad.com>
Date: Fri, 03 Feb 2012 13:53:44 +0100
nico@puntnl.niks (Nico Coesel) writes:

>>There are several nice features like interfaces (which are
>>bi-directional unlike record bundles in VHDL). There's also enum's,
>
> Record bundles can be bi-directional in VHDL!

That's great news as I've always used one record type as input and
another one as output. Care to share some examples? And how do you
swap them like you do with modports in SV?

//Petter
-- 
.sig removed by request. 

Article: 153345
Subject: Re: Design Notation VHDL or Verilog?
From: nico@puntnl.niks (Nico Coesel)
Date: Fri, 03 Feb 2012 13:38:34 GMT
Petter Gustad <newsmailcomp6@gustad.com> wrote:

>nico@puntnl.niks (Nico Coesel) writes:
>
>>>There are several nice features like interfaces (which are
>>>bi-directional unlike record bundles in VHDL). There's also enum's,
>>
>> Record bundles can be bi-directional in VHDL!
>
>That's great news as I've always used one record type as input and
>another one as output.

Just declare the port as inout. It's as simple as that.
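
Something like this (a minimal sketch; the names are made up). Each side drives
the whole record but puts 'Z' on the fields it does not own, and the std_logic
resolution function sorts out the rest:

library ieee;
use ieee.std_logic_1164.all;

package bus_pkg is
  type bus_rec is record
    req  : std_logic;                     -- driven by the master
    addr : std_logic_vector(7 downto 0);  -- driven by the master
    ack  : std_logic;                     -- driven by the slave
  end record;
end package bus_pkg;

-- master side, with port (b : inout bus_rec):
b <= (req => '1', addr => x"5A", ack => 'Z');

-- slave side, with port (b : inout bus_rec):
b <= (req => 'Z', addr => (others => 'Z'), ack => '1');

This only works because every field is a resolved type, and synthesis support
for inout record ports varies from tool to tool.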

-- 
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------

Article: 153346
Subject: Re: Design Notation VHDL or Verilog?
From: Gabor <gabor@szakacs.invalid>
Date: Fri, 03 Feb 2012 09:18:00 -0500
glen herrmannsfeldt wrote:
[snip]
> As far as I know, currently if something can be done either with
> an FPGA or a small microcontroller, or even DSP processor,
> the choice is often for the processor. With the price of smaller
> FPGAs coming down, that might change, but also there are more
> people who know how to program such processors than HDL design.
> 
> Given that, I would expect some overlap between FPGA/HDL
> applications and processor/software applications, depending
> on the speed required at the time. Some applications might 
> initially be developed in software, debugged and tested, before
> moving to an FPGA/HDL implementation. 
> 
> Maybe the question isn't whether scaled fixed point is useful
> in HDL, but why isn't it useful, enough that people ask for
> it in an HLL, in software! Why no scaled fixed point datatype
> in C or Java? 

You may as well ask why we're not using slide rules instead
of calculators.  Once you have floating point arithmetic
with reasonable performance, fixed-point becomes of little
value.  If you really want to keep track of your binary point
instead of letting the floating point package do it for you,
you can always use integer types.  Anyone who has worked on
fixed point FFT logic knows just what a pain it can be to
keep track of scaling.  I don't know many C or Java programmers
willing to take on that task.  At my company, the only person
who ever wrote fixed point code for processing had a background
in microcoding and his whole job was just to create accelerated
image processing libraries.

Obviously for hardware floating point is a huge resource hog,
but when you're using a microprocessor that already has it
built in there's no big incentive not to use it.

-- Gabor

Article: 153347
Subject: Xilinx Artix-7 availability
From: Arne Pagel <arne@pagelnet.de>
Date: Sat, 04 Feb 2012 13:14:46 +0100
Did anybody hear anything about the availability of the Xilinx Artix-7 series?

Especially I am interested in the XC7A8 or XC7A15 in the FTG256 Package.


regards
    Arne

Article: 153348
Subject: Re: Xilinx Artix-7 availability
From: Uwe Bonnes <bon@elektron.ikp.physik.tu-darmstadt.de>
Date: Sat, 4 Feb 2012 13:24:14 +0000 (UTC)
Arne Pagel <arne@pagelnet.de> wrote:
> did anybody hear something about the availability about the Xilinx
> Artix-7 series?

No

> Especially I am interested in the XC7A8 or XC7A15 in the FTG256 Package.

Expect general availability somewhere around 2013.

ISE 13.4 still has no BSD files for Artix-7...

Look at how long it took for Spartan-6, Spartan-3XX etc...

-- 
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de

Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

Article: 153349
Subject: Re: Xilinx Artix-7 availability
From: John Adair <g1@enterpoint.co.uk>
Date: Sat, 4 Feb 2012 05:41:55 -0800 (PST)
The small parts have been removed from the product table recently
http://www.xilinx.com/publications/prod_mktg/Artix7-Product-Table.pdf
leaving only the biggest 3 parts on the list. You might speculate about what
is happening, but it might be that the Spartan-6 is good enough for this
sector and they won't do a gen7 set of parts here. Equally well they
may have other plans.

Whatever is left in the family is probably 6+ months away until we see
ES silicon, and longer for production volumes.

John Adair
Enterpoint Ltd.

On Feb 4, 12:14 pm, Arne Pagel <a...@pagelnet.de> wrote:
> did anybody hear something about the availability about the Xilinx Artix-7 series?
>
> Especially I am interested in the XC7A8 or XC7A15 in the FTG256 Package.
>
> regards
>     Arne



