


Article: 150050
Subject: Re: Lattice XO2 video
From: Gabor <gabor@alacron.com>
Date: Tue, 7 Dec 2010 13:14:27 -0800 (PST)
On Dec 7, 3:41 pm, Sebastien Bourdeauducq
<sebastien.bourdeaud...@gmail.com> wrote:
> On 6 déc, 23:56, Gabor <ga...@alacron.com> wrote:
>
> > Xilinx is already announcing Virtex 8
>
> Really? Could not find a trace of said announcement.
>
> > while not shipping V7.
>
> And neither cheap and large quantities of S6 (maybe until recently).

Must have been dreaming.  Probably got confused with 28 nm Virtex 7 HT
announcement...

Although it seems someone was talking about 22 nm...

Article: 150051
Subject: Re: Opinions on Lattice ECP3
From: David Brown <david@westcontrol.removethisbit.com>
Date: Wed, 08 Dec 2010 09:49:25 +0100
On 07/12/2010 17:58, Nial Stewart wrote:
>> The Cyclone IV GX are a definite possibility - unfortunately the currently-available EP4CGX15 is
>> too small (though the price is nice), while the EP4CGX110 is far too big.  The EP4CGX22 or 30
>> would be more appropriate, if they existed.
>
>
> David, push your local Altera distributor for availability.
>
> My tame (reliable) Altera FAE has told me they are imminent (engineering silicon
> available now I think).
>

Thanks for that info.  My Altera distributors are also very good.  I 
haven't contacted them yet because I'm still gathering information, 
vague price estimates and ideas - FPGAs are only one of them.  But once 
we have come a bit further, we will certainly contact distributors.

mvh.,

David


Article: 150052
Subject: Re: Linux on Microblaze
From: David Brown <david@westcontrol.removethisbit.com>
Date: Wed, 08 Dec 2010 10:40:59 +0100
On 07/12/2010 21:45, Sebastien Bourdeauducq wrote:
On 7 déc, 18:13, Tim Wescott <t...@seemywebsite.com> wrote:
>> Well, that was my point -- if you're going to make money off of GPL'd
>> software, you're going to sell services, and give the software away for
>> free.
>
> Which has the perverse effect of giving an incentive for free software
> service companies to write obscure and poorly documented code, so they
> can stay in business. See for example what Codesourcery does.
>

I think that is quite simply incorrect.  Companies that intend to make 
money from code - free or otherwise - usually aim to be as professional 
about it as they can.

Codesourcery does not write "obscure and poorly documented" code - at 
least, not gcc and related tools.  I can't answer for their other 
products like the various libraries they make.  Codesourcery do not 
write gcc alone - they are working with a massive existing code base 
that has been developed over a long time by many companies and 
individuals.  It is certainly fair to describe a lot of gcc as "obscure" 
code, although many parts of it is reasonably well documented.  However, 
you can't blame the intertwined structure of gcc on CodeSourcery - much 
of it stems back to RMS's original design decisions, which included a 
highly monolithic design to make it difficult for commercial compiler 
developers to steal parts of the code.




Article: 150053
Subject: Re: Concurrent Logic Timing
From: "RCIngham" <robert.ingham@n_o_s_p_a_m.n_o_s_p_a_m.gmail.com>
Date: Wed, 08 Dec 2010 06:07:22 -0600
>>On Dec 6, 1:00 pm, Andy <jonesa...@comcast.net> wrote:
>>> I think I would use a function for the intermediate calculation, and
>>> then call the function in both concurrent assignment statements per
>>> the original implementation.
>>>
>>> Integers give you the benefits of bounds checking in simulation (even
>>> below the 2^n granularity if desired), and a big improvement in
>>> simulation performance, especially if integers are widely used in the
>>> design (instead of vectors).
>>>
>>> Andy
>>
>>I know everyone says that integers run faster, but is this a
>>significant effect?  Has it been measured or at least verified on
>>current simulators?
>>
>>Rick
>>
[Correction]

They certainly use less memory in simulation than wide vectors. An integer
(32 bit) is 4 bytes. A std_logic_vector (9 states) is 4 bits per bit. If
your data is > 8 bits in width, integers are more efficient.

Also, no need for resolution function calls, either.
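The arithmetic behind that claim can be sketched quickly (a Python back-of-envelope using the 4-bits-per-bit figure from the post; actual simulator storage layouts vary by tool and may spend a whole byte per std_logic element):

```python
# Back-of-envelope storage comparison: a 32-bit integer occupies one
# 4-byte word, while a std_logic_vector needs 4 bits per element to
# encode its 9 possible states.

def integer_bytes() -> int:
    return 4  # one 32-bit machine word

def slv_bytes(width: int) -> int:
    # 4 bits per std_logic element, rounded up to whole bytes
    return (4 * width + 7) // 8

# break-even is at 8 bits of data; above that the integer is smaller
for w in (8, 9, 16, 32):
    print(f"{w:2d}-bit data: integer {integer_bytes()} B, slv {slv_bytes(w)} B")
```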
	   
					
---------------------------------------		
Posted through http://www.FPGARelated.com

Article: 150054
Subject: Re: Concurrent Logic Timing
From: Jonathan Bromley <spam@oxfordbromley.plus.com>
Date: Wed, 8 Dec 2010 07:13:59 -0800 (PST)
On Dec 8, 12:07 pm, "RCIngham" wrote:

[integers]
> certainly use less memory in simulation than wide vectors. An integer
> (32 bit) is 4 bytes. A std_logic_vector (9 states) is 4 bits per bit. If
> your data is > 8 bits in width, integers are more efficient.

That's the minimum memory footprint.  In practice, simulators probably
use a representation that gives them less work to do in packing and
unpacking the bits.  For example, enumerations are probably stored in
8 bits rather than 4.

Having said that, all tool vendors have accelerated implementations
of numeric_std, and they may have other, proprietary representations
to support the acceleration.  I don't know if this really happens, but
if I were doing it I'd be tempted to use a 32-bit integer to store the
value if it has only 0/1 bits in it, which is true most of the time
for most uses of signed/unsigned vectors.  And then I'd keep the full
vector in an array of bytes, and a flag to say which representation
was currently in use.  That would allow for excellent arithmetic
performance in the majority of cases, while allowing full std_logic
behaviour if needed.
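The hybrid scheme described above could be modelled roughly like this (a Python sketch of the idea only; the class and its methods are invented for illustration, and no real simulator is claimed to work this way):

```python
# Sketch of a dual-representation vector: keep a plain integer while
# every bit is 0/1, and fall back to a per-bit array once a metavalue
# ('X', 'U', 'Z', ...) appears.  All names here are invented.

class HybridVector:
    def __init__(self, width: int, value: int = 0):
        self.width = width
        self.packed = value      # valid only while is_packed is True
        self.bits = None         # list of std_logic chars when unpacked
        self.is_packed = True

    def poison(self):
        # a driver put an 'X' on the signal: switch representations
        self.is_packed = False
        self.bits = ['X'] * self.width

    def add(self, other: "HybridVector") -> "HybridVector":
        if self.is_packed and other.is_packed:
            # fast path: one machine add, wrapped to the vector width
            mask = (1 << self.width) - 1
            return HybridVector(self.width, (self.packed + other.packed) & mask)
        # slow path: a metavalue poisons the whole result, as in numeric_std
        r = HybridVector(self.width)
        r.poison()
        return r

print(HybridVector(8, 200).add(HybridVector(8, 100)).packed)  # 44 = 300 mod 256
```

The point of the flag is that the common 0/1 case never touches the per-bit array, which is where the arithmetic speed-up would come from.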

> Also, no need for resolution function calls, either.

If a simulator can statically determine that a
signal has only one driving process, it can
skip the resolution function altogether.
Once again I don't know whether real commercial
simulators do this, but it seems like an obvious
and easy optimization for the IEEE standard types,
all of which have well-behaved resolution functions
that are identity functions for a single driver.
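That identity property can be checked against the IEEE 1164 table itself (a Python transcription of the standard's `resolution_table`; I believe it matches, but it should be verified against the package source):

```python
from functools import reduce

# The nine std_logic values, in the row/column order of IEEE 1164.
V = "UX01ZWLH-"

# Transcription of the IEEE 1164 resolution_table (rows and columns
# both in V order); verify against the std_logic_1164 package body.
T = [
    "UUUUUUUUU",  # U
    "UXXXXXXXX",  # X
    "UX0X0000X",  # 0
    "UXX11111X",  # 1
    "UX01ZWLHX",  # Z
    "UX01WWWWX",  # W
    "UX01LWLWX",  # L
    "UX01HWWHX",  # H
    "UXXXXXXXX",  # -
]

def resolve(drivers: str) -> str:
    """Fold the driver values pairwise through the table.  With a
    single driver, reduce() returns it untouched -- exactly the
    identity property that lets a simulator skip the call."""
    return reduce(lambda a, b: T[V.index(a)][V.index(b)], drivers)

print(resolve("1"), resolve("01"), resolve("LH"))  # 1 X W
```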
--
Jonathan Bromley

Article: 150055
Subject: Re: Getting libusb-driver to work with Xilinx dev board.
From: Bob Smith <bsmith@linuxtoys.org>
Date: Wed, 08 Dec 2010 09:21:10 -0800
rupertlssmith@googlemail.com wrote:
> I could not get windrv to work, as I'm running Debian, and it seems very
> picky about the version/distro it will work with, so I have gone down
> the route of trying to get the open source drivers working. Does
> anybody have any experience of getting the libusb-driver to work? I'm
> following the instructions here:


Our Demand Peripherals BaseBoard4 might be a little expensive
for your needs but it works very well with Linux and the Xilinx
tools.  There is even a sample Makefile for the Xilinx tool set.

Bob


Article: 150056
Subject: spacewire project on opencores.org
From: Alessandro Basili <alessandro.basili@cern.ch>
Date: Wed, 08 Dec 2010 20:10:51 +0100
Hi everyone,
after some struggles I have eventually found the time to revive an old
project on opencores which hasn't been updated in a while:

a spacewire link and router.

I have just been assigned as co-maintainer, since the original
maintainer seems to have been unavailable for a while.
I intend to bring the status of the project back to "planning", since I
would like to discuss again the structure of the project, starting from
the specification documentation and the overall design structure.

I'd like to stress that I am not a spacewire expert, but I have been 
working on a "modified" version of it that is in use in the AMS-02 
experiment (http://ams.cern.ch) which is ready to be launched next year 
on the International Space Station.

At the moment I would like to share my motivation, hoping to find some 
feedback and some interest.

The purpose of the spacewire standard is (citation from the 
ECSS‐E‐ST‐50‐12C):

- to facilitate the construction of high‐performance on‐board 
data‐handling systems;
- to help reduce system integration costs;
- to promote compatibility between data‐handling equipment and subsystems;
- to encourage reuse of data‐handling equipment across several different
missions.

In this respect a handful of firms have grown to provide SoC know-how
and system integration capabilities to "serve" space exploration and
space science. ESA, for example, is promoting R&D in order to improve
the European space industry sector.
Even though I do understand the commercial impact of this approach, I 
still believe that we can do much more through an open platform, 
improving the quality of the solutions and allowing for a greater 
spectrum of products.
In my limited experience I have worked on two space experiments
(pamela.roma2.infn.it and ams.cern.ch) and witnessed at least four
others (ALTEA, GLAST-FERMI, LAZIO-SiRAD, AGILE). A great deal of
development was focused on the onboard data-handling systems, with
ad-hoc interfaces and non-standard solutions.
We had the possibility to adopt spacewire, but the "closed" solutions
provided by the industry are rather counterproductive in an open
environment like that of the academic collaborations we have (costs
are rather high and liability is often unclear).
This is where open IP cores may come into action and empower low-budget
experiments to build reliable and reusable systems, cutting
development costs and enabling them to focus on science.
The industry itself may benefit from this approach, since a good
licensing policy (like the LGPL) may foster interest and widespread
use (hence enhancing the reliability) of these IP cores.

A more reliable and widely used standard gives a tremendous boost to our
space-related dreams, and even though it's just a piece of wire, I
believe it still builds bridges worldwide.

Any feedback is appreciated.

Al

p.s.: this post will be on opencores.org forum as well.

-- 
Alessandro Basili
CERN, PH/UGC
Electronic Engineer

Article: 150057
Subject: Re: Concurrent Logic Timing
From: Kolja Sulimma <ksulimma@googlemail.com>
Date: Wed, 8 Dec 2010 12:04:05 -0800 (PST)
On 7 Dez., 17:21, "RCIngham"
<robert.ingham@n_o_s_p_a_m.n_o_s_p_a_m.gmail.com> wrote:
> >On Dec 6, 1:00 pm, Andy <jonesa...@comcast.net> wrote:
> Also, no need for resolution function calls, either.

Yes. Everybody should be using std_ulogic instead of std_logic. It is
a lot faster.
It even catches some bugs earlier because now multiple drivers are
illegal at compile time.

BUT: The tool vendors make it very hard to do that.
For some reason only known to Xilinx (other vendors are similar), all
entities in timing simulation
models are changed to std_logic. Also, all ports of their IP are
std_logic.

Today's devices do not even support internal tristate signals, so why
use resolution functions?

Kolja

Article: 150058
Subject: Re: Multiple clock domains
From: Vaughn <vaughnbetz@gmail.com>
Date: Wed, 8 Dec 2010 12:16:28 -0800 (PST)
Replying to the question below:

> Does Altera check hold times?

Yes, Altera tools check hold times.  We check all timing constraints:
setup & hold, plus the asynchronous reset equivalents (recovery &
removal) at all timing corners (3 in our latest devices), with on-die
variation and jitter models applied at each corner.  The corners cover
large, correlated variation, while the "within corner" min/max
variation and clock uncertainty models cover less correlated variation
and timing jitter.

If TimeQuest says it works, and you have properly timing constrained
the design, it will be robust.

Regards,

Vaughn Betz
Altera

Hal Murray wrote:
> In article <icmns0$f6j$1@news.eternal-september.org>,
>  glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:
> >alessandro.strazzero@gmail.com <alessandro.strazzero@gmail.com> wrote:
> >
> >> I'm a newbie in VHDL and I would like to submit a question. I have a
> >> system which is clocked at 24MHz by an external oscillator.
> >> This clock is the input of a PLL internal to the FPGA.
> >> The outputs of the PLL are two clocks: one at 48MHz to clock
> >> a CPU and a custom logic called "A", and the other one at 24MHz to
> >> clock a custom logic called "B".
> >(snip)
> >
> >> The question is: do the custom logic "B" signals have to be
> >> syncronized with the 48MHz clock ?
> >
> >If you can meet the setup and hold times, then you are safe.
>
> Right.  But there are two tricky areas.
>
> One is that you have to include clock skew, and that includes
> the skew between the two clocks.
>
> The other is that you have to check the hold time.  Several/many
> years ago, somebody reported this sort of problem on a Xilinx part.
> Xilinx doesn't check hold times.  They are guaranteed-by-design.
> Except something didn't work if there was a little extra clock
> skew due to using two different clocks.
>
> Does Altera check hold times?
>
> --
> These are my opinions, not necessarily my employer's.  I hate spam.

Article: 150059
Subject: Re: Concurrent Logic Timing
From: Mike Treseler <mtreseler@gmail.com>
Date: Wed, 08 Dec 2010 14:10:35 -0800
On 12/8/2010 12:04 PM, Kolja Sulimma wrote:

> Yes. Everybody should be using std_ulogic instead of std_logic. It is
> a lot faster.
> It even catches some bugs earlier because now multiple drivers are
> illegal at compile time.
>
> BUT: The tool vendors make it very hard to do that.
> For some reason only known to Xilinx (other vendors are similar), all
> entities in timing simulation
> models are changed to std_logic. Also, all ports of their IP are
> std_logic.

It is std_ulogic_vector that is the problem.
std_ulogic is compatible for bits.

                -- Mike Treseler

Article: 150060
Subject: LPDDR on spartan-3e
From: jonpry <jonpry@gmail.com>
Date: Wed, 8 Dec 2010 15:30:00 -0800 (PST)
Hi All,

   I have a spartan-3e board with a piece of LPDDR on it. After
modifying the initialization logic in the MiG sources, I was able to get
the user_example running in the simulator. In hardware I can see that the
chip is bursting out what was written to it previously. However,
inside of the fpga the data read back from the memory is not correct.
I originally suspected the DQS delay circuitry and built a simple
module that causes the MiG design to cycle through all 6 dqs taps at 1
per second. None of the taps result in good read back. I am confused
as to what could be causing the problem.

   I've noticed that in the simulator, things go badly if I run the
design too slow. Anything slower than a 12 ns period causes read errors.
Haven't managed to track down the source of this, but it seems to be
related to some confusion in the data generator.

Any advice would be appreciated.

Thanks,

Jon Pry

Article: 150061
Subject: Re: Lattice XO2 video
From: Mittens <mittens@_nospam_hush.ai>
Date: Thu, 09 Dec 2010 00:07:23 -0000
On Mon, 06 Dec 2010 11:13:52 -0000, Mike Harrison <mike@whitewing.co.uk>  
wrote:

> http://www.youtube.com/watch?v=h_USk-HNgPA&feature=player_detailpage
>
> Come on X and A - spice up your promo  videos!

I recall Altera did have some odd promo material bundled in with their  
Cubic Cyclonium. A bike race around the inside of their office maybe?  
Anyhow I guess it probably pre-dates YouTube by a few years.

Article: 150062
Subject: Re: Concurrent Logic Timing
From: KJ <kkjennings@sbcglobal.net>
Date: Wed, 8 Dec 2010 19:42:15 -0800 (PST)
On Dec 8, 5:10 pm, Mike Treseler <mtrese...@gmail.com> wrote:
> On 12/8/2010 12:04 PM, Kolja Sulimma wrote:
>
> > Yes. Everybody should be using std_ulogic instead of std_logic. It is
> > a lot faster.
> > It even catches some bugs earlier because now multiple drivers are
> > illegal at compile time.
>
> > BUT: The tool vendors make it very hard to do that.
> > For some reason only known to Xilinx (other vendors are similar), all
> > entities in timing simulation
> > models are changed to std_logic. Also, all ports of their IP are
> > std_logic.
>
> It is std_ulogic_vector that is the problem.
> std_ulogic is compatible for bits.
>

Yes, but it is mostly an easily overcome problem.  If your signals are
all std_u and you're caught having to interface to a widget with std_l
then the vector conversions can be made right in the port map.

The_std_logic_widget : entity work.std_logic_widget port map
(
   gazinta_std_logic_vector => std_logic_vector(some_std_ulogic_vector),
   std_ulogic_vector(gazouta_std_logic_vector) => some_other_std_ulogic_vector
);

It does make for a bit more space on the lines in the port map in
order to line up the => into a tidy column...but that can be improved
with two short name aliases if you want.  That way you can always use
std_ulogic/vector everywhere you write your own code to get the
benefit of the compiler catching multiple drivers without having to
debug to find that problem.

Kevin Jennings

Article: 150063
Subject: Re: spacewire project on opencores.org
From: Sebastien Bourdeauducq <sebastien.bourdeauducq@gmail.com>
Date: Thu, 9 Dec 2010 02:36:07 -0800 (PST)
On 8 déc, 20:10, Alessandro Basili <alessandro.bas...@cern.ch> wrote:
> after some struggles I have eventually found the time to revive an old
> project on opencores which hasn't been updated since a while:

I would not bother. Why not simply use GRLIB's code?
http://www.gaisler.com/cms/index.php?option=com_content&task=view&id=357&Itemid=82
While GRLIB also has some of the problems that plague most Opencores
designs (namely, slowness and resource utilization through the roof),
at least the cores work, are supported and are well documented.

S.

Article: 150064
Subject: Re: LPDDR on spartan-3e
From: jonpry <jonpry@gmail.com>
Date: Thu, 9 Dec 2010 06:10:08 -0800 (PST)
>    I've noticed that in the simulator, things go badly if I run the
> design too slow. Anything slower than 12ns period causes read errors.
> Haven't managed to track down the source of this, but it seems to be
> related to some confusion in the data generator.

   I've looked into this a little further. It appears that at slower
clock speeds, rst_dqs_delay is not going low until slightly after the
last dqs clock in the burst, causing the fifo write flag to stay
enabled until after the first word of the next transfer, be it read or
write, thus getting the data patterns all screwed up. I've yet to
determine if this is really happening in hardware. I'm also not
convinced about the MiG behavioral test bench as it does not include
assignment delays anywhere.

   Ideally I would like to run my LPDDR at very low speed to rule out
signal integrity problems until the logic is proven. There is no DLL
in the memory and it seems to operate fine down at 10 MHz. That being
said, I have tried the design at all manner of speeds with little
difference. Any experience out there with getting MiG slowed down?

Article: 150065
Subject: Re: spacewire project on opencores.org
From: Alessandro Basili <alessandro.basili@cern.ch>
Date: Thu, 09 Dec 2010 16:41:57 +0100
On 12/9/2010 11:36 AM, Sebastien Bourdeauducq wrote:
> On 8 déc, 20:10, Alessandro Basili <alessandro.bas...@cern.ch> wrote:
>> after some struggles I have eventually found the time to revive an old
>> project on opencores which hasn't been updated since a while:
>
> I would not bother. Why not simply use GRLIB's code?
> http://www.gaisler.com/cms/index.php?option=com_content&task=view&id=357&Itemid=82
> While GRLIB also has some of the problems that plague most Opencores
> designs (namely, slowness and resource utilization through the roof),
> at least the cores work, are supported and are well documented.
>
> S.

I believe you are referring to the GPL package and not to the
proprietary version. I don't quite believe you would get so much
support (at least from them) unless you buy the proprietary one; and
secondly, GRLIB promotes the AMBA bus for its SoCs, which is a rather
complex bus compared to Wishbone.

IMHO GRLIB is a great effort to provide a fully integrated system on
chip (either FPGA or ASIC), one that I do not dare to attempt (or
compete against). On the contrary, my intent is to have a simple enough
IP core which can be easily integrated and reused, in order to promote
the protocol.

But I do appreciate your comment, and I will consider the possibility
of publishing only the spacewire part out of the whole library, maybe
stripping off the AMBA interface, even though I need to evaluate the
licensing issue.

Al

Article: 150066
Subject: Interfacing DS92LV1021 with FPGA serdes
From: "j." <garas.rez@gmail.com>
Date: Thu, 9 Dec 2010 08:02:45 -0800 (PST)
Given a data link that uses the DS92LV1021 chip (16-40 MHz 10-bit bus
LVDS serializer) to send data. Unfortunately it also sends start and
stop bits, 12 bits per word in total (@480 MHz). The task is to build a
receiver in an FPGA. However, both Xilinx and Altera built-in SERDES
circuits support only 10-bit words ("deserialization factor"). Is there
any way to interface them with the above DS92LV1021 chip?

Article: 150067
Subject: Re: LPDDR on spartan-3e
From: "maxascent" <maxascent@n_o_s_p_a_m.n_o_s_p_a_m.yahoo.co.uk>
Date: Thu, 09 Dec 2010 10:47:37 -0600
>>    I've noticed that in the simulator, things go badly if I run the
>> design too slow. Anything slower than 12ns period causes read errors.
>> Haven't managed to track down the source of this, but it seems to be
>> related to some confusion in the data generator.
>
>   I've looked into this a little further. It appears that at slower
>clock speeds, rst_dqs_delay is not going low until slightly after the
>last dqs clock in the burst. Causing the fifo write flag to stay
>enabled until after the first word of the next transfer, be it read or
>write. Thus getting the data patterns all screwed up. I've yet to
>determine if this is really happening in hardware. I'm also not
>convinced about the MiG behavioral test bench as it does not include
>assignment delays anywhere.
>
>   Ideally I would like to run my LPDDR at very low speed to rule out
>signal integrity problems until the logic is proven. There is no DLL
>in the memory and it seems to operate fine down at 10mhz. That being
>said, I have tried the design at all manner of speeds with little
>difference. Any experience out there with getting MiG slowed down?
>

Can't say I am a great fan of MIG. The design seems incredibly bloated and
not very easy to get to run at a reasonable speed. I ended up writing my
own DDR2 controller.

You should check that MIG and the device allow you to run at such a slow
speed. You really need a good simulation to start with, with all timings
verified. Check the datasheet to verify that no timings are being violated.
Can you not look at the data on a scope to see if you are getting the
correct signals and verify timing? Memory can be a pain to get working so
you need to be as meticulous as possible.

Regards

Jon	   
					

Article: 150068
Subject: Re: Interfacing DS92LV1021 with FPGA serdes
From: Gabor <gabor@alacron.com>
Date: Thu, 9 Dec 2010 08:50:03 -0800 (PST)
On Dec 9, 11:02 am, "j." <garas....@gmail.com> wrote:
> Given a Data Link that uses DS92LV1021 chip (16-40MHz 10-bit bus LVDS
> Serializer) to send data. Unfortunately it also sends start and stop
> bits, 12-bit each word total (@480MHz). Task is to build a receiver in
> FPGA. However, both Xilinx and Altera built-in SERDES circuits support
> only 10-bit words ("deserialization factor"). Is there any way to
> interface them with the above DS92LV1021 chip?

Many of the newer Xilinx chips can do deserialization at these rates
using standard I/Os rather than the "GTP" or "Rocket I/O".  I expect
newer Altera parts can do the same.  For Xilinx there are some app
notes on "high-speed LVDS" you can look for on their site.

-- Gabor

Article: 150069
Subject: Re: LPDDR on spartan-3e
From: Gabor <gabor@alacron.com>
Date: Thu, 9 Dec 2010 08:55:00 -0800 (PST)
On Dec 9, 11:47 am, "maxascent"
<maxascent@n_o_s_p_a_m.n_o_s_p_a_m.yahoo.co.uk> wrote:
> >>    I've noticed that in the simulator, things go badly if I run the
> >> design too slow. Anything slower than a 12 ns period causes read errors.
> >> Haven't managed to track down the source of this, but it seems to be
> >> related to some confusion in the data generator.
>
> >   I've looked into this a little further. It appears that at slower
> >clock speeds, rst_dqs_delay is not going low until slightly after the
> >last dqs clock in the burst, causing the fifo write flag to stay
> >enabled until after the first word of the next transfer, be it read or
> >write, thus getting the data patterns all screwed up. I've yet to
> >determine if this is really happening in hardware. I'm also not
> >convinced about the MiG behavioral test bench as it does not include
> >assignment delays anywhere.
>
> >   Ideally I would like to run my LPDDR at very low speed to rule out
> >signal integrity problems until the logic is proven. There is no DLL
> >in the memory and it seems to operate fine down at 10 MHz. That being
> >said, I have tried the design at all manner of speeds with little
> >difference. Any experience out there with getting MiG slowed down?
>
> Can't say I am a great fan of MIG. The design seems incredibly bloated
> and not very easy to get to run at a reasonable speed. I ended up
> writing my own DDR2 controller.
>
> You should check that MIG and the device allow you to run at such a
> slow speed. You really need a good simulation to start with, with all
> timings verified. Check the datasheet to verify that no timings are
> being violated. Can you not look at the data on a scope to see if you
> are getting the correct signals and verify timing? Memory can be a
> pain to get working so you need to be as meticulous as possible.
>
> Regards
>
> Jon

There are other issues with LPDDR if you mean "mobile" low-power
parts.  The start-up initialization sequence is different, as well as
the I/O standards and DQS timing.  They do _not_ have delay-locked
loops in them, so read timing almost works better using your internal
clock than the DQS signal.  Also I used them with Lattice parts that
have special stunt logic for DDR, and had to scrap their DQS recovery
because of the I/O standard and the fact that their preamble
detector didn't work unless you had SSTL (it used the difference
in voltage level between AC and DC low in the standard).

-- Gabor

Article: 150070
Subject: Re: LPDDR on spartan-3e
From: jonpry <jonpry@gmail.com>
Date: Thu, 9 Dec 2010 09:06:36 -0800 (PST)
> Can't say I am a great fan of MIG. The design seems incredibly bloated and
> not very easy to get to run at a reasonable speed. I ended up writing my
> own DDR2 controller.
> You should check that MIG and the device allow you to run at such a slow
> speed. You really need a good simulation to start with all timings
> verified. Check the datasheet to verify that no timings are being violated.
> Can you not look at the data on a scope to see if you are getting the
> correct signals and verify timing? Memory can be a pain to get working so
> you need to be as meticulous as possible.

The MIG controller is a bit complicated, but it does appear to be the
correct architecture for LPDDR parts. On the scope I can see that the
memory is indeed working properly, so its timings must be fine.
Whether or not the memory is meeting the FPGA's timing is a different
story.

I've managed to confirm that what is happening in simulation when the
clock is slower than 12 ns is indeed what is happening in hardware at
any speed from 10 to 100 MHz.  I guess I will need to rewrite the dqs
fifo enable stuff. I think if dqs comes after some margin the logic is
just broken and won't turn off.

> They do _not_ have delay-locked
> loops in them, so read timing almost works better using your internal
> clock than the DQS signal.

This argument seems backwards to me. There is almost no point in using
DQS on regular DDR parts because the DLL phase aligns it to the
master clock, giving you a multitude of good options. But with no DLL,
there is no phase guarantee, forcing you to use a truly source
synchronous design.

Article: 150071
Subject: Re: LPDDR on spartan-3e
From: Gabor <gabor@alacron.com>
Date: Thu, 9 Dec 2010 13:33:43 -0800 (PST)
Links: << >>  << T >>  << A >>
On Dec 9, 12:06 pm, jonpry <jon...@gmail.com> wrote:
> > Can't say I am a great fan of MIG. The design seems incredibly bloated
> > and not very easy to get to run at a reasonable speed. I ended up
> > writing my own DDR2 controller.
> > You should check that MIG and the device allow you to run at such a
> > slow speed. You really need a good simulation to start with, with all
> > timings verified. Check the datasheet to verify that no timings are
> > being violated. Can you not look at the data on a scope to see if you
> > are getting the correct signals and verify timing? Memory can be a
> > pain to get working so you need to be as meticulous as possible.
>
> The MIG controller is a bit complicated, but it does appear to be the
> correct architecture for LPDDR parts. On the scope I can see that the
> memory is indeed working properly, so its timings must be fine.
> Whether or not the memory is meeting the FPGA's timing is a different
> story.
>
> I've managed to confirm that what is happening in simulation when the
> clock is slower than 12 ns is indeed what is happening in hardware at
> any speed from 10 to 100 MHz.  I guess I will need to rewrite the dqs
> fifo enable stuff. I think if dqs comes after some margin the logic is
> just broken and won't turn off.
>
> > They do _not_ have delay-locked
> > loops in them, so read timing almost works better using your internal
> > clock than the DQS signal.
>
> This argument seems backwards to me. There is almost no point in using
> DQS on regular DDR parts because the DLL phase aligns it to the
> master clock, giving you a multitude of good options. But with no DLL,
> there is no phase guarantee, forcing you to use a truly source
> synchronous design.

The point I was making is that the DQS pins of the mobile DDR memories
are not phase aligned to the DQ signals.  For normal DDR memories the
DQS is edge-aligned to the DQ, which is not easy to use directly, but
with a 90-degree phase-shift circuit in the FPGA (MIG uses this where
possible) you can very accurately center-sample the data.  It is not
so easy to get a center sampling point using the DQS output of a
mobile DDR device.

You can run mobile DDR much more slowly than standard DDR because it
doesn't have the DLL.  You can even gate the clock to it if you follow
the rules in the data sheet.  When running more slowly, the data eye
gets bigger and is easier to hit without the added complexity of the
DQS signals.
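As a rough illustration of the internal-clock capture idea, here is a
minimal sketch for one DQ pin on a Spartan-3E.  It assumes a DCM
elsewhere in the design already produces the 90-degree-shifted clock;
the entity and signal names are invented for the example, and this is
only workable at the low clock rates discussed above where the data
eye is wide.

```vhdl
-- Sketch only: capture LPDDR read data with an internal
-- 90-degree-shifted clock instead of the memory's DQS strobe.
library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity lpddr_capture is
  port (
    clk90   : in  std_logic;   -- DCM 90-degree output (assumed)
    clk90_n : in  std_logic;   -- inverted copy for the falling edge
    dq_pad  : in  std_logic;   -- one DQ pin from the memory
    q_rise  : out std_logic;   -- data captured on the rising edge
    q_fall  : out std_logic    -- data captured on the falling edge
  );
end entity;

architecture rtl of lpddr_capture is
begin
  -- Spartan-3E input DDR flip-flop: both edges of clk90 sample DQ
  -- near the center of each half-bit eye, with no use of DQS at all.
  capture : IDDR2
    generic map (DDR_ALIGNMENT => "NONE", INIT_Q0 => '0', INIT_Q1 => '0')
    port map (
      C0 => clk90, C1 => clk90_n,
      CE => '1', R => '0', S => '0',
      D  => dq_pad,
      Q0 => q_rise, Q1 => q_fall
    );
end architecture;
```

A real controller would of course replicate this per DQ bit and add
the write path; this only shows where the phase-shifted clock takes
the place of the DQS strobe on reads.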

-- Gabor

Article: 150072
Subject: Re: LPDDR on spartan-3e
From: jonpry <jonpry@gmail.com>
Date: Thu, 9 Dec 2010 15:46:50 -0800 (PST)
>
> The point I was making is that the DQS pins of the mobile DDR memories
> are not phase aligned to the DQ signals.  For normal DDR memories the
> DQS is edge-aligned to the DQ, which is not easy to use, but with a
> 90 degree phase-shift circuit in the FPGA (MIG uses this where
> possible) can very accurately center-sample the data.  It is not so
> easy to get a center sampling point using the DQS output of a mobile
> DDR device.
>

I have read your other posts on the subject. You mentioned some
difference between mobile and standard DDR's DQ/DQS relationship.
After seeing this, I made an IMHO heroic effort to find out what this
difference is, and turned up very little. Maybe it is my particular
chip, but from the datasheet:

DQS edge-aligned with data for READs; center-
aligned with data for WRITEs

Which sounds an awful lot like what you were describing for DDR. From
an implementation perspective, it seems like it would be trivial to
clock out DQS right with the DQ. I can't imagine why they would do
anything else.

> You can run Mobile DDR much slower than the standard DDR because
> it doesn't have the DLL.  You can even gate the clock to it if you
> follow the rules in the data sheet.  When running more slowly, the
> data eye gets bigger and is easier to hit without the added
> complexity of the DQS signals.
>
> -- Gabor

My operation seems to be working now, at least at 50 MHz. I get
problems at 100 MHz, but there are several things that could be
causing this. I ended up short-circuiting the rst_dqs_div stuff that
loops back outside of the chip. This allowed the flag enough time to
get the read FIFO turned off at the end of the burst. I still am not
totally sure why this fix was needed in the testbench, let alone the
hardware, and I don't understand the consequences of removing it.

Gabor, thanks for your help anyway; your original posts inspired me
to use LPDDR in my design, since Xilinx is more negative about the
whole thing.

~Jon

Article: 150073
Subject: Re: Multiple clock domains
From: rickman <gnuarm@gmail.com>
Date: Thu, 9 Dec 2010 18:12:46 -0800 (PST)
On Dec 8, 3:16 pm, Vaughn <vaughnb...@gmail.com> wrote:
> Replying to the question below:
>
> > Does Altera check hold times?
>
> Yes, Altera tools check hold times.  We check all timing constraints:
> setup & hold, plus the asynchronous reset equivalents (recovery &
> removal) at all timing corners (3 in our latest devices), with on-die
> variation and jitter models applied at each corner.  The corners cover
> large, correlated variation, while the "within corner" min/max
> variation and clock uncertainty models cover less correlated variation
> and timing jitter.
>
> If TimeQuest says it works, and you have properly timing-constrained
> the design, it will be robust.
>
> Regards,
>
> Vaughn Betz
> Altera

Not trying to pick on Altera as I think this is a problem with all
vendors' products.  But how do you verify that your timing constraints
are correct?  It is easy to say that any part of a design will be
correct if you have done your design work right.  But 90% of
engineering is making sure that it is correct.  There are many
different methods, techniques and tools to assist the process of
verification for nearly every aspect of design work.  I don't know of
any that will verify that your design is properly constrained for
timing.
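To make the concern concrete, here is a hypothetical SDC fragment (all
names invented; this is not from any real design).  The timing engine
will honor it and report the design as passing, yet nothing in the
flow checks whether the exception itself is justified:

```tcl
# Hypothetical SDC fragment -- clock and port names are made up.
create_clock -name clk_a -period 10.0 [get_ports clk_a]
create_clock -name clk_b -period  8.0 [get_ports clk_b]

# If these domains actually exchange data without a proper
# synchronizer, this one line silently hides a real hazard,
# and no tool flags the constraint itself as wrong.
set_false_path -from [get_clocks clk_a] -to [get_clocks clk_b]
```

The tool verifies the design against the constraints; verifying the
constraints against the designer's intent is left entirely to the
designer.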

I have never figured out why no one seems to see this as a problem.
Is there something I am missing?

Rick

Article: 150074
Subject: Re: Concurrent Logic Timing
From: rickman <gnuarm@gmail.com>
Date: Thu, 9 Dec 2010 18:51:55 -0800 (PST)
On Dec 8, 10:42 pm, KJ <kkjenni...@sbcglobal.net> wrote:
> On Dec 8, 5:10 pm, Mike Treseler <mtrese...@gmail.com> wrote:
>
>
>
> > On 12/8/2010 12:04 PM, Kolja Sulimma wrote:
>
> > > Yes. Everybody should be using std_ulogic instead of std_logic. It is
> > > a lot faster.
> > > It even catches some bugs earlier because now multiple drivers are
> > > illegal at compile time.
>
> > > BUT: The tool vendors make it very hard to do that.
> > > For some reason only known to Xilinx (other vendors are similar), all
> > > entities in timing simulation
> > > models are changed to std_logic. Also, all ports of their IP are
> > > std_logic.
>
> > It is std_ulogic_vector that is the problem.
> > std_ulogic is compatible for bits.
>
> Yes, but it is mostly an easily overcome problem.  If your signals are
> all std_u and you're caught having to interface to a widget with std_l
> then the vector conversions can be made right in the port map.
>
> The_std_logic_widget : entity std_logic_widget port map
> (
>    gazinta_std_logic_vector => std_logic_vector(some_std_ulogic_vector),
>    std_ulogic_vector(gazouta_std_logic_vector) => some_other_std_ulogic_vector
> );
>
> It does make for a bit more space on the lines in the port map in
> order to line up the => into a tidy column...but that can be improved
> with two short name aliases if you want.  That way you can always use
> std_ulogic/vector everywhere you write your own code to get the
> benefit of the compiler catching multiple drivers without having to
> debug to find that problem.
>
> Kevin Jennings
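
The short-alias trick mentioned in the quoted post can be sketched as
follows.  This is illustrative only: the widget entity, ports, and
signal names are invented, and the aliases wrap the standard
to_stdlogicvector/to_stdulogicvector conversion functions from
std_logic_1164 (the signatures select the right overloads):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity some_wrapper is
end entity;

architecture sketch of some_wrapper is
  signal some_std_ulogic_vector       : std_ulogic_vector(7 downto 0);
  signal some_other_std_ulogic_vector : std_ulogic_vector(7 downto 0);
  -- Short names for the two conversion directions:
  alias slv  is to_stdlogicvector  [std_ulogic_vector return std_logic_vector];
  alias sulv is to_stdulogicvector [std_logic_vector  return std_ulogic_vector];
begin
  -- Hypothetical widget with std_logic_vector ports; the conversion
  -- on the formal (sulv) handles the output direction.
  the_widget : entity work.std_logic_widget
    port map (
      gazinta_std_logic_vector       => slv(some_std_ulogic_vector),
      sulv(gazouta_std_logic_vector) => some_other_std_ulogic_vector
    );
end architecture;
```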

I don't have a problem with multiple drivers very often, but it could
help once in a while.  But the downside of std_ulogic is that it
doesn't have the math operators that unsigned and signed have, does
it?  Maybe this has been discussed here before and I have forgotten,
but if I primarily use numeric_std types and seldom use
std_logic_vector (heck, forget the math, just saving on the typing is
enough for me to use numeric_std), isn't the whole ulogic/logic thing
moot?
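
A minimal sketch of that numeric_std style, with invented names:
arithmetic stays on unsigned throughout, and std_logic_vector appears
only at the port boundary.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity count8 is
  port (
    clk   : in  std_logic;
    rst   : in  std_logic;
    count : out std_logic_vector(7 downto 0)
  );
end entity;

architecture rtl of count8 is
  signal cnt : unsigned(7 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        cnt <= (others => '0');
      else
        cnt <= cnt + 1;            -- "+" comes from numeric_std
      end if;
    end if;
  end process;

  count <= std_logic_vector(cnt);  -- convert only at the boundary
end architecture;
```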

I am using integers more as I get used to the ways of getting them to
do the things I want.  Integers are good for math-related signals, but
not so good for logic.  There was recently a thread here or in
comp.lang.vhdl about (or that got hijacked into) making integers
capable of logic and bit operations.  I think the suggestion was to
treat all integers as being implemented as 2's complement by default.
If you could do logic operations on integers I might not use anything
else...

But how would the various std_logic states be handled if integers are
used?  If the signal is uninitialized, std_logic shows this with a 'U'
I believe.  You can't have multiple drivers for an integer, so I don't
know that the other values of std_logic would be missed when using
integer.  Do I need 'X', 'H', 'L' if there is only one driver?  I can
see where 'U' and possibly '-' would be useful, though, and those are
what you lose with integers.

Rick


