Messages from 85950

Article: 85950
Subject: Re: CPLD fusemap data - why the secrecy?
From: "Peter Alfke" <alfke@sbcglobal.net>
Date: 18 Jun 2005 17:40:50 -0700
Dimiter, I checked your website, and now I understand why you want the
data. You post:

"All our software is written and tested using our own tools under DPS
(our OS) on computers which are our development; and all the hardware
and PCB design are done using our own CAD tools. Our products range
from computer hardware and highly sophisticated software to precision,
wide bandwidth analog instruments."

I personally do not think it is a good idea to do everything
"homebrew". In this country, the successful companies buy every
required tool that can be bought, and only develop the hardware and
software that is unique to their success. That way we take advantage of
the fast-paced competitive innovation of other companies, and we
concentrate our efforts and our capital on what we do best, and what
nobody can do for us.
Bulgaria may have been a different environment in the past, but isn't it
similar now?

Anyhow, now that I know your motivation (even though I disagree with
it), I will try to eliminate some obstacle within Xilinx. No promises!
Peter Alfke


Article: 85951
Subject: Re: CPLD fusemap data - why the secrecy?
From: "dp@tgi-sci.com" <dp@tgi-sci.com>
Date: 18 Jun 2005 18:08:05 -0700
>Bulgaria may have been a different environment in the past, but isn't it
>similar now?

It is just where I reside. I invest a significant amount of effort
so that my work is not influenced by the immediate environment,
and apart from some everyday life inconveniences I manage to do
that.
Regarding the "all in one house" approach, I am not a fan of it,
either. I just feel I have software resources which are worth being
maintained and preserved for battles to come, so I don't want to
drop them in favour of others, that's all. It does cost me more
effort sometimes, but overall it is a good deal since I manage to
survive on what I have in house and there is not much else I could
do... So as long as I can, I'll keep MS out of my design process.

>Anyhow, now that I know your motivation (even though I disagree with
>it), I will try to eliminate some obstacle within Xilinx. No promises!

Thanks for promising to try!

Dimiter
------------------------------------------------------------------
Dimiter Popoff               Transgalactic Instruments
http://www.tgi-sci.com
------------------------------------------------------------------


Article: 85952
Subject: Re: Idea exploration - Image stabilization by means of software.
From: "Clay S. Turner" <Physics@Bellsouth.net>
Date: Sat, 18 Jun 2005 22:20:07 -0400

"Anton Erasmus" <nobody@spam.prevent.net> wrote in message 
news:1119000420.d54828b53b9bcd51f76b2b5b640103a6@teranews...
>>
> I have read a bit about the astronomical image processing mentioned in
> one of the other posts. One thing that came up quite often is that
> CCD sensors are linear, i.e. double the exposure time gives double the
> energy received. (They do not mention CMOS, but it is probably the
> same.)
> I agree that blurring occurs when taking a single photo when the
> camera is moved and the exposure is too long. If one can, say, instead
> of 1x 2 sec exposure, take 20x 0.1 sec exposures and stack them
> into a single picture in software, one should end up close to the
> exposure level of the 2 sec picture, but without the blur.
> In the astronomical case, the "short" exposure pictures are in minutes
> for a total exposure time of hours. Of course the subject must not
> move during the "short" exposure time.
> If one scaled this down so that the "short" exposure time is in
> milliseconds or even microseconds depending on what the sensors can
> do, then one can do the same thing just at a much faster scale.
> What is the minimum exposure time for a current CMOS sensor before one
> just sees the inherent sensor noise?

Actually in speckle interferometry, around 30 exposures are taken per second. 
The idea is that the atmosphere is composed of cells (roughly 10 to 20 cm in 
diameter) that are stationary during a short time frame like a 30th of a 
second. If a telescope's objective is smaller than 10 cm in diameter, you 
won't have a speckle problem.  A sequence of pictures taken at a high rate 
is overlapped so the noise tends to cancel out. A large study of binary 
star orbits has been carried out this way. Looking at point sources (stars) 
allows this simple speckle trick to actually work. With extended images, it 
becomes a whole different problem.

Both Canon and Nikon perform Image Stabilization (Canon's name) and 
Vibration Reduction (Nikon's name) by actually rotating an optical element 
(prism) located at or near an optical center of the lens. Translating the 
sensor requires more effort than rotating a prism to effectively remove 
motion. And yes, the lens employs inertial sensors to effect motion 
reduction. I can say from personal experience that they work extremely well. 
Most tests with such optical systems yield a two to three f-stop increase in 
exposure time for the same amount of blurring as without compensation.

Clay Turner








Article: 85953
Subject: Re: Problem for xilinx!!!
From: "AMDyer@gmail.com" <AMDyer@gmail.com>
Date: 18 Jun 2005 20:33:29 -0700
The IDCODE is part of the JTAG interface.  You load an
IDCODE instruction into the JTAG instruction register and
the part will connect its IDCODE register into the JTAG
data register chain.  The s/w will then switch to shifting
the data register out and can get a manufacturer, device, and
revision number from it.
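
For reference, the 32-bit word that gets shifted out of the IDCODE
register breaks down into fixed fields defined by IEEE 1149.1. A
minimal C sketch of the field extraction (the register value passed in
main() is made up purely for illustration):

#include <stdio.h>
#include <stdint.h>

/* IEEE 1149.1 IDCODE layout (LSB shifted out first):
 *   bit  0      : always 1 (distinguishes IDCODE from BYPASS)
 *   bits 1..11  : manufacturer ID (JEDEC code)
 *   bits 12..27 : part number
 *   bits 28..31 : version / revision
 */
static void decode_idcode(uint32_t idcode)
{
    unsigned marker  =  idcode        & 0x1;
    unsigned mfg     = (idcode >> 1)  & 0x7FF;
    unsigned part    = (idcode >> 12) & 0xFFFF;
    unsigned version = (idcode >> 28) & 0xF;

    printf("IDCODE 0x%08X: mfg=0x%03X part=0x%04X rev=0x%X marker=%u\n",
           (unsigned)idcode, mfg, part, version, marker);
}

int main(void)
{
    decode_idcode(0x01414093u);   /* example value only */
    return 0;
}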

If you have non-xilinx devices in your chain, make sure
you're driving their TRST/ (async tap reset) pin inactive.

Also, it's been my experience in the past with iMPACT that
you should just take the most straightforward path to
programming your parts - define the chain, set the config
file, and program.  Using any other 'features' of iMPACT
is likely to get it confused.


Article: 85954
Subject: Re: Ideal CPU for FPGA?
From: Philip Freidin <philip@fliptronics.com>
Date: Sun, 19 Jun 2005 04:03:49 GMT
On Sun, 19 Jun 2005 01:23:50 +0100, dave <dave@dave.dave> wrote:
>In an unrelated thread on FPGA<->CPU clock speeds, an FPGA developer 
>stated that FPGAs are ill suited to replicating legacy CPU designs.
>
>I never got around to asking why, but a more relevant question is:
>
>What type of CPU architecture are FPGAs more suited to implement?

Actually both questions are relevant, and are best answered together.

When designing a CPU a key ingredient is for the architect to be
totally familiar with the strengths and weaknesses of the underlying
implementation medium. This knowledge then affects the decisions and
tradeoffs that follow. ("Medium" here is the design abstraction
that the CPU designer will be using to implement the CPU. If you work
for Intel in the x86 organization, it is CMOS. If you are using an
FPGA for your canvas, then although they are CMOS chips, your
abstraction is gates/LUTs/flip flops with CE/carry chains  etc.)

Here is my first example: If the goal is a 100 ns instruction cycle
time, and the implementation medium supports wide PROMS at 5 ns, then
a microcoded implementation may be appropriate. If the PROM takes
60 ns, then hardwired decode is the more likely choice.

All things are not equal. The area, speed, power, drive strength,
input load, and other factors do not have the same ratios across all
implementation mediums.

Second example: CMOS, NMOS, PMOS, Bipolar, GaAs, SiGe all have different
"weights" for the above list of characteristics for muxes, flip flops,
memory cells, decoder trees, bus drivers, registers, etc.

What if you have ultra fast logic, but crappy slow I/O, or maybe
your required cycle time is 2 orders of magnitude slower than the
cycle time the hardware can support. Then maybe a serial architecture
is called for.

So when you look at a legacy CPU, you are looking at a design that
has been crafted to take advantage of the medium it was designed to
be implemented in. Other things affect CPU architecture. It is well
established that to be continually successful in the CPU business, you
need to leverage as best as possible the existing software base, and
development tools. So follow-on products tend to be backward
compatible, and the development tools often seamlessly will take the
source for a program (maybe in C, Fortran, PL/M, ...) and compile to
the newer product. Or maybe not so seamlessly. If it is a major change,
the products do not fare well, because once you force the customer to
abandon the existing environment, then they may as well look at
competitors in their product selection process. The Intel 860 and 960
are good examples of a company that lost its way for a while. What this
means is that legacy CPUs often have an amazing amount of baggage that
is there for backward compatibility. Some of it makes no sense for
current applications, but must be there in case some legacy program
comes along. The latest Pentium will still boot DOS 6.1, in 8088 mode,
with silly 1 MB address space, and those inane segment registers.
All this must be replicated for an FPGA equivalent to be called
equivalent.

What all this means is that because legacy CPUs were designed with
some type of silicon implementation medium, the tradeoffs of how the
instruction set is implemented are strongly influenced by this. The
other major influence is the experience of the CPU architect and the
CPUs they have worked on in the past. Oh, and also marketing gets into
the act, and says you have to have decimal adjust instructions, and a
half-carry bit, and handle 1's complement as well as 2's complement
arithmetic. Oh, and the software people say they absolutely need to
have a semaphore instruction and hardware task switching support.
And more ....

Now take the legacy CPU and try to map it onto an FPGA. The medium
is different, and all the careful balancing that was done for the
legacy CPU is totally out of whack for the FPGA implementation. It
is not that it can't be done, it is that it can't be done efficiently.
And you are not allowed to drop the bits that are difficult. If you
do, then you are no longer compatible, and the hoped for leverage of
existing tools or existing code goes out the window.

As to what maps well to FPGAs, the answer is an architecture that is
designed with an FPGA as the target. This tends to be as follows:
Basically RISC style register sets rather than accumulator(s), with
multiple reads and writes per cycle. Pipelined operation (4 or 5
stages). Direct coding of the control bits for the data path in the
instruction encoding (minimal decode trees). Parallel execution of
mutually exclusive functions with the un-needed results thrown away
(Add, barrel shift, priority encode/CLZ, branch calc, TLB search, etc
all occur every cycle, based on the same operands fetched, but only
the needed result is used).
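
To make that concrete, here is a behavioral C sketch (not HDL) of the
"compute everything, select one result" execute stage described above;
the set of units and the select encoding are invented for illustration:

#include <stdint.h>

/* All "units" evaluate every cycle, as they would in hardware; the
 * instruction's directly encoded select field just picks which result
 * gets written back. The only decision is a result mux.
 */
enum result_sel { SEL_ADD, SEL_SHIFT, SEL_CLZ };

static uint32_t count_leading_zeros(uint32_t x)
{
    uint32_t n = 0;
    if (x == 0) return 32;
    while (!(x & 0x80000000u)) { x <<= 1; n++; }
    return n;
}

uint32_t execute(uint32_t a, uint32_t b, enum result_sel sel)
{
    uint32_t add_result   = a + b;
    uint32_t shift_result = (b < 32) ? (a << b) : 0;
    uint32_t clz_result   = count_leading_zeros(a);

    switch (sel) {
    case SEL_SHIFT: return shift_result;
    case SEL_CLZ:   return clz_result;
    default:        return add_result;
    }
}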

The first CPU in an FPGA (RISC4005/R16) and XSOC/XR16 and NIOS, and
MicroBlaze all look like this. They all easily outperform legacy
CPUs implemented in the same FPGAs, because they are tuned to the
resources that the FPGAs have to offer.


Philip Freidin
Fliptronics

Article: 85955
Subject: Re: Upgrading the EDK from 6.3
From: Mark Fleming <mark_fleming@sbcglobal.NOSPAM.net>
Date: Sun, 19 Jun 2005 05:38:03 GMT
Simon,

      Xilinx will send you an update CD for the EDK if your subscription 
is still valid.  I purchased BaseX/EDK 6.3 back in December, but had 
problems getting my upgrades for both.  The first problem was that both 
packages were still registered to the vendor I bought them from, rather 
than myself, so I couldn't download the ISE update.  The second problem 
was that I didn't receive the EDK update CD.

      Both problems were resolved relatively quickly and easily by 
calling the Xilinx toll-free support number.  Give them the product 
number and place of purchase, and they'll set you right up.  I had much, 
much less luck with the email support address given in the registration 
acknowledgement email.  Not much help at all...

-- Mark

Simon wrote:
> So, having had something of a forced absence from FPGA's for a few 
> months, I've just been looking at upgrading to 7.1 for both ISE and EDK. 
> My BaseX subscription seems to have allowed me to update ISE to 7.1, but 
> I can't see any way of upgrading my EDK ?
> 
> Questions:
> 
> 1) Is it possible to do an upgrade, or is it a question of re-purchasing 
> the EDK every time there's a release ? I bought it in March (just 
> checked the order :-) and I thought you got upgrades for a year ?
> 
> 2) Will the EDK 'service pack 7_1_01' (which I do seem to have access 
> to) upgrade a 6.3 version of the EDK ?
> 
> 3) Am I simply being stupid and missing the blinking neon lights 
> somewhere on the site saying 'Oy, it's over here'... ?
> 
> Tx for any help :-)
> 
> Simon

Article: 85956
Subject: Re: Ideal CPU for FPGA?
From: "JJ" <johnjakson@yahoo.com>
Date: 18 Jun 2005 23:57:50 -0700
Pretty good summary.

>In an unrelated thread on FPGA<->CPU clock speeds, an FPGA developer
>stated that FPGAs are ill suited to replicating legacy CPU designs.

>I never got around to asking why, but a more relevant question is:

>What type of CPU architecture are FPGAs more suited to implement?

I think several of us have said this on occasion or more often, but
there are times when it is justified. I believe one avionics system
does ship with a Z8000-compatible FPGA design at the original speed;
the function had to be identical and cycle accurate, which only meant
<20MHz. Today <20MHz for a general-purpose clone of anything isn't
interesting except to history buffs. There is an almost complete
Transputer clone from U Tokyo, but it is still only at the original
speed of about 25MHz and not public. I am sure both of these use vast
amounts of FPGA logic, far beyond MB/Nios though. Look also at the
68k/x86 cores from some vendors: they are available but enormous and
slow (faster than the originals though). Compare Leon/Sparc with MB
and the OpenCores arch: Leon uses far more resources and ran at a
lower actual clock, but in a nice twist it was actually the only
complete design and ended up on par with MB (MB had some FPU issues).
The OpenCores design got smoked because it was designed with ASIC in
mind and was very incomplete.


Developing a RISC from the ground up almost suggests building it
bottom up rather than top down.

Bottom up allows you to find a unit design that works at the desired
speed for each functional part, and then elaborate that into something
that is useful without losing performance along the way. Trouble is,
it is a bit difficult to build cpus this way unless you have a clear
vision all the way back to the top. Eventually all these widgets must
interoperate, and some control decisions between them suddenly become
serial, not parallel, and the design is broken after much work. I say
that from experience of a design where many parts easily ran
separately close to the limits, but they could only be integrated with
one path becoming brutally awkward; overall perf went down by 3x, so I
killed that design approach.

Top down is the way most cpus would naturally be designed when you are
familiar with your materials, but as soon as a design is dumped into
the EDA tools one can get nasty surprises very quickly.

What always kills cpu performance is decision making.

For an ASIC, 10 real gate delays can make some pretty complex
structures that can do any of the following: a fast 2-port SRAM, a
64-bit adder (Alpha design), or a lot of mixed arithmetic and control
decision logic. Combine all 3 in parallel and you have a classic RISC
in short order, limited to 10-gate-delay clock cycles, which today is
1-4GHz. The higher end, though, is all driven by complex
transistor-level circuit design, not regular std cell.

In an FPGA the BlockRAMs are, if pipelined, as effective as in a
low-end ASIC. The adder designs are far less effective; we are stuck
with ripple designs because of the general placement of cells. Using
custom-designed logic adders doesn't work so well, since the free
built-in ripple adders are not handicapped by general wiring. In a
BlockRAM cycle time you only get 2-3 LUTs of logic depth, which is way
below 10 gates of ASIC decision control.


FPGAs have proved themselves to be pretty good at DSP engines,
largely because they don't get up and make decisions on clock cycle
boundaries; they are very good at continuously streaming data over the
same operators, i.e. a datapath. Many DSPs that do make some decisions
can do so within a more relaxed timing scheme; say, an FFT engine must
switch some controls every 2^Nth clock in predictable block patterns.
These patterns are almost always regular and can be made to go fast
enough too.

CPUs on the other hand demand that decisions be made on each and
every clock cycle about any number of things. The more the FPGA cpu
looks like a conventional cpu, the more some of those decisions must
be made in a combined fashion, always on each clock. Suppose you
branch y/n at address 0xffff over to 0x10010. That might look like a
short +ve branch, but it may also involve the MMU in page faults
and/or the cache updating a new line, as well as actually making the
bcc decision, which means you don't know where the next op is coming
from for sure.


There is only one way that I know to deliver a cpu that runs even
remotely as fast as a DSP, and that means no clock-to-clock decisions,
only decisions made over as many clocks as possible, such as 4 or 8.
This leads naturally to threaded designs, or barrel engines, or in DSP
speak commutators. The datapath/instruction-fetch/decode is made
somewhat longer: not just 5 pipeline stages but more like 15, with 4
threads in flight, each with a decision to make every 4th clock.

Such an MT design is more complex in several aspects but much simpler
in others. This sort of design looks like a DSP filter with some logic
working effectively at sub-clocks. The new complexity lies in the
barrel control keeping 4 or 8 sets of state in flight, but in a mostly
regular, thread-independent manner. The simplicity lies in having 4-8
clocks to actually make a decision for every thread cycle. We will see
more of this sort of design in ASICs too, because there the problem is
DRAM cycle latencies; the 4- or 8-way designs can effectively make
memories 4-8 times faster to each thread.
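
As a rough illustration, here is a toy C model of the barrel idea,
assuming a 4-thread round-robin issue (the thread count, state layout
and the trivial "execute" step are all invented for illustration). The
point is that a given thread only owns the pipeline every NTHREADS
clocks, so anything it started has that many clocks to settle before
its next decision:

#include <stdio.h>
#include <stdint.h>

#define NTHREADS 4              /* barrel depth */

struct thread_state {
    uint32_t pc;
    uint32_t regs[16];
};

static struct thread_state ts[NTHREADS];

/* One "clock": issue one instruction from the thread whose slot it is. */
static void clock_tick(uint64_t cycle)
{
    unsigned tid = (unsigned)(cycle % NTHREADS);   /* round-robin commutator */
    struct thread_state *t = &ts[tid];

    t->pc += 4;                 /* placeholder execute: just advance the PC */
}

int main(void)
{
    for (uint64_t c = 0; c < 16; c++)
        clock_tick(c);
    for (unsigned i = 0; i < NTHREADS; i++)
        printf("thread %u pc=0x%08X\n", i, (unsigned)ts[i].pc);
    return 0;
}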

Now when the same MT approach is applied to a DRAM such as Micron
RLDRAM, which is also effectively 8-way threaded, you get an obvious
match made in heaven: 8 instructions can almost be executed within 8
DRAM cycles, but they actually complete every 8 or more clocks, i.e. a
20-25ns window. That allows for some radical simplifications too: no
data cache. Having to use SDRAM for the same task would allow the 70ns
latency to be hidden well, but it has very little bank concurrency, so
the threads would hardly ever be allowed to do ld/st.


johnjakson at usa dot com
transputer2 at yahoo


Article: 85957
Subject: damage Atmel AT40k/AT94k with wrong bitstream?
From: Adam Megacz <megacz@cs.berkeley.edu>
Date: Sun, 19 Jun 2005 00:38:57 -0700

Hey, does anybody know if you can damage Atmel's newer FPGAs with a
bad bitstream (ie vdd-to-ground contention)?

There's no Big Scary Warning in the data sheet (that I could find),
but on the other hand, given the fact that so much of the global
routing is based on pass transistors and they let you create
multi-driver buses, I can't see how they could possibly protect
against this.

This was kind of weird.  I know Xilinx has put the Big Scary Warning
on datasheets for parts which actually can't be damaged this way
(covering their ass?), so I'd expect most vendors to err on the side
of caution.  Hrm.

Anybody know?

  - a

-- 
"I didn't see it then, but it turned out that getting fired was the
 best thing that could have ever happened to me. The heaviness of
 being successful was replaced by the lightness of being a beginner
 again, less sure about everything. It freed me to enter one of the
 most creative periods of my life."

          -- Steve Jobs, commencement speech at Stanford, June 2005

Article: 85958
Subject: Re: comp.arch.fpga.<mfr>
From: Adam Megacz <megacz@cs.berkeley.edu>
Date: Sun, 19 Jun 2005 00:50:49 -0700

John_H <johnhandwork@mail.com> writes:
> If one of the "little guys" comes up with a very marketable device,
> the buzz here will tend to get people interested.

Wow.  Very, very, very good point.

  - a


Article: 85959
Subject: Re: Idea exploration - Image stabilization by means of software.
From: "Jon Harris" <jon_harrisTIGER@hotmail.com>
Date: Sun, 19 Jun 2005 00:57:12 -0700
> "Anton Erasmus" <nobody@spam.prevent.net> wrote in message
> news:1119000420.d54828b53b9bcd51f76b2b5b640103a6@teranews...
> >>
> > I have read a bit about the astronomical image processing mentioned in
> > one of the other posts. One thing that came up quite often is that
> > CCD sensors are linear, i.e. double the exposure time gives double the
> > energy received. (They do not mention CMOS, but it is probably the
> > same.)
> > I agree that blurring occurs when taking a single photo when the
> > camera is moved and the exposure is too long. If one can, say, instead
> > of 1x 2 sec exposure, take 20x 0.1 sec exposures and stack them
> > into a single picture in software, one should end up close to the
> > exposure level of the 2 sec picture, but without the blur.

I'm assuming that the software wouldn't simply stack the pictures exactly on top
of each other, but move each one around slightly so as to best align them.

One potential difficulty is that the time it takes to read out the data from the
sensor and "clear" it for the next exposure could be significant, especially for
very short exposures.  I don't know specifics, but for example, it might take
3-4 seconds to take your 2 seconds' worth of exposure.  Clearly that is not a
good thing!

> > In the astronomical case, the "short" exposure pictures are in minutes
> > for a total exposure time of hours. Of course the subject must not
> > move during the "short" exposure time.

Another difficulty I see is that when you have a severely underexposed picture,
you are going to lose a lot of shadow detail since it ends up below the noise
floor.  A perfectly linear sensor with no noise wouldn't have this problem, but
real-world ones certainly do.  Consider, for example, a picture that has a
black-to-white gradient.  When exposed normally, the camera records levels from
0 (full black) to maximum (full white).  But with an exposure that is 1/20 of
the proper value, much of the dark gray portion will be rendered as 0 (full
black). Even after you add the 20 exposures together, you still get full black
for those dark gray portions.  So everything below a certain dark gray level is
clipped to full black.
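
A small C sketch of that clipping effect, assuming an idealized
noiseless 8-bit sensor (the bit depth and the 1/20 ratio are just the
numbers from the example above). For a scene level of 1% of full
scale, the single exposure records code 2, while the 20 stacked short
exposures each quantize to 0 and sum to 0:

#include <stdio.h>

#define FULL_SCALE 255          /* idealized noiseless 8-bit sensor */
#define N_SHORT    20           /* 20 exposures of 1/20 the length  */

int main(void)
{
    /* scene brightness as a fraction of full scale */
    double scene[] = { 0.002, 0.01, 0.05, 0.25, 1.0 };

    for (int i = 0; i < 5; i++) {
        int single = (int)(scene[i] * FULL_SCALE);

        /* each short exposure collects 1/20 of the light, then quantizes */
        int one_short = (int)(scene[i] * FULL_SCALE / N_SHORT);
        int stacked   = one_short * N_SHORT;

        printf("scene %.3f: single=%3d stacked=%3d\n",
               scene[i], single, stacked);
    }
    return 0;
}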

In astro-photography, this might not be such a problem, as the unlit sky is
supposed to be black anyway, but with typical photographic scenes, it would be.

> > If one scaled this down so that the "short" exposure time is in
> > milliseconds or even microseconds depending on what the sensors can
> > do, then one can do the same thing just at a much faster scale.
> > What is the minimum exposure time for a current CMOS sensor before one
> > just sees the inherent sensor noise?

As mentioned above, the shorter you try to make your individual exposures, the
more the read/clear time becomes a factor.  State-of-the-art high mega-pixel
cameras today can shoot maybe 8-10 frames/second.  So I doubt the sensors are
capable of dealing with microseconds or even low milliseconds.  Video
cameras can of course shoot 30 frames/second, but their sensors are typically
quite low resolution (i.e. <1 mega-pixel).



Article: 85960
Subject: clock domain : DDR read enable
From: "Ibrahim Magdy" <bibo1978@maktoob.com>
Date: Sun, 19 Jun 2005 01:13:23 -0700
Hi,

I am doing a DDR controller, using a delayed version of the data strobe to capture the data. However, my main problem lies with the clock-enable signal: it comes from another clock domain and it has to be delayed by the CAS latency. My problem is that it violates setup and hold time whenever my design speed differs. Is there any way to avoid this?

Article: 85961
Subject: Re: Idea exploration - Image stabilization by means of software.
From: Anton Erasmus <nobody@spam.prevent.net>
Date: Sun, 19 Jun 2005 10:33:58 +0200
On Sun, 19 Jun 2005 00:57:12 -0700, "Jon Harris"
<jon_harrisTIGER@hotmail.com> wrote:

>> "Anton Erasmus" <nobody@spam.prevent.net> wrote in message
>> news:1119000420.d54828b53b9bcd51f76b2b5b640103a6@teranews...
>> >>
>> > I have read a bit about the astronomical image processing mentioned in
>> > one of the other posts. One thing that came up quite often is that
>> > CCD sensors are linear, i.e. double the exposure time gives double the
>> > energy received. (They do not mention CMOS, but it is probably the
>> > same.)
>> > I agree that blurring occurs when taking a single photo when the
>> > camera is moved and the exposure is too long. If one can, say, instead
>> > of 1x 2 sec exposure, take 20x 0.1 sec exposures and stack them
>> > into a single picture in software, one should end up close to the
>> > exposure level of the 2 sec picture, but without the blur.
>
>I'm assuming that the software wouldn't simply stack the pictures exactly on top
>of each other, but move each one around slightly so as to best align them.
>
>One potential difficulty is that the time it takes to read out the data from the
>sensor and "clear" it for the next exposure could be significant, especially for
>very short exposures.  I don't know specifics, but for example, it might take
>3-4 seconds to take your 2 seconds worth of exposure.  Clearly that is not a
>good thing!

Lots of cheap digital cameras can take short mpeg movies, so it cannot
be that slow for the sensor to recover.
If someone has better information regarding the technical specs of
these devices, as well as what the theoretical limits are, it would be
quite nice to know.

>> > In the astronomical case, the "short" exposure pictures are in minutes
>> > for a total exposure time of hours. Of course the subject must not
>> > move during the "short" exposure time.
>
>Another difficulty I see is that when you have a severely underexposed picture,
>you are going to lose a lot of shadow detail since it ends up below the noise
>floor .  A perfectly linear sensor with no noise wouldn't have this problem, but
>real-world ones certainly do.  Consider, for example, a picture that has a
>black-to-white gradient.  When exposed normally, the camera records levels from
>0 (full black) to maximum (full white).  But with an exposure that is 1/20 of
>the proper value, much of the dark gray portion will be rendered as 0 (full
>black). Even after you add the 20 exposures together, you still get full black
>for those dark gray portions.  So everything below a certain dark gray level is
>clipped to full black.

Yes, one would need a sensor with a low noise floor. The lower the
better. AFAIK the more pixels they pack into a sensor, the higher the
noise floor. Currently the whole emphasis is on producing sensors with
more pixels. If the emphasis were on producing sensors with a very low
noise floor, I am sure a suitable sensor could be developed.

>In astro-photography, this might not be such a problem, as the unlit sky is
>supposed to be black anyway, but with typical photographic scenes, it would be.
>
>> > If one scaled this down so that the "short" exposure time is in
>> > milliseconds or even microseconds depending on what the sensors can
>> > do, then one can do the same thing just at a much faster scale.
>> > What is the minimum exposure time for a current CMOS sensor before one
>> > just sees the inherent sensor noise?
>
>As mentioned above, the shorter you try to make your individual exposures, the
>more the read/clear time becomes a factor.  State-of-the-art high mega-pixel
>cameras today can shoot maybe 8-10 frames/second.  So I doubt the sensors are
>capable of dealing with microseconds or even low milliseconds.  Video
>cameras can of course shoot 30 frames/second, but their sensors are typically
>quite low resolution (i.e. <1 mega-pixel).
>

Yes, but at what exposure time does one currently hit the wall, and
where does the actual theoretical wall lie? Also, if one could cool
down the sensor, by how much would the noise floor be reduced?

Regards
  Anton Erasmus


Article: 85962
Subject: Re: Idea exploration - Image stabilization by means of software.
From: Piergiorgio Sartor <piergiorgio.sartor@nexgo.REMOVETHIS.de>
Date: Sun, 19 Jun 2005 12:21:00 +0200
Kris Neot wrote:
[...]
> How does my idea work?

How do you distinguish between motion due to a shaky hand
and motion due to external "object" motion?

I mean, if I take a picture of a highway, there is a lot
of "shakiness" in the subject already...

bye,

-- 

piergiorgio

Article: 85963
Subject: Re: circuit optimization - a feedbackless machine
From: "valentin tihomirov" <spam@abelectron.com>
Date: Sun, 19 Jun 2005 13:28:37 +0300
OK, as the muxes are already there, we had better exploit them. Do FPGAs
include a prerouted CE signal trace? In fact, I have done some synthesis
experimentation, and the controlled version gives up only about 1% of
the "free" implementation's speed. That's curious.


-- controlled (CLK and DONE signal names assumed)
  elsif rising_edge(CLK) then
      if not DONE then
          STATE <= STATE_NEXT;
      end if;


-- free running
  elsif rising_edge(CLK) then
      STATE <= STATE_NEXT;


How much smaller/faster would CE-free FPGAs be (controlled by a single
reset)? Any references teaching FPGA design methodologies would be
appreciated.



Article: 85964
Subject: Re: Ideal CPU for FPGA?
From: dave <dave@dave.dave>
Date: Sun, 19 Jun 2005 11:45:37 +0100
Philip Freidin wrote:
> On Sun, 19 Jun 2005 01:23:50 +0100, dave <dave@dave.dave> wrote:
> 
> Actually both questions are relevant...
> ....
> ....
> The first CPU in an FPGA (RISC4005/R16) and XSOC/XR16 and NIOS, and
> MicroBlaze all look like this. They all easily outperform legacy
> CPUs implemented in the same FPGAs, because they are tuned to the
> resources that the FPGAs have to offer.
> 
> Philip Freidin
> 
> Philip Freidin
> Fliptronics

Thanks.

Article: 85965
Subject: Re: Ideal CPU for FPGA?
From: dave <dave@dave.dave>
Date: Sun, 19 Jun 2005 11:50:16 +0100
JJ wrote:
> Pretty good summary....
>....
> There is only one way that I know to deliver a cpu that runs even
> remotely as fast as a DSP and that means no clock to clock decisions,
> only decisions made over as many clocks as possible such as 4 or 8.
> This leads naturally to threaded designs or barrel engines or in DSP
> speak commutators. The datapath-instruction-fetch=decode is made
> somewhat longer, not just 5 pipelines but more like 15 with 4 threads
> in flight each with a decision to make every 4th clock.
> 
> Such an MT design is more complex in several aspects but much simpler
> in others. This sort of design looks like a DSP filter with some logic
> working effectively at sub clocks. The new complexity lies in the
> barrel control keeping 4 or 8 sets of state in flight but in a mostly
> regular thread independant manner. The simplicity lies in having 4-8
> clocks to actually make a decision for every thread cycle. We will see
> more of this sort of design in ASICs too because there the problem is
> DRAM cycle latencies, the 4 or 8 way designs can effectively make
> memories 4-8 times faster to each thread.
> 
> Now when the same MT approach is applied to the DRAM such as Micron
> RLDRAM which is also effectively 8way threaded, you get an obvious
> match in heaven, 8 instructions can almost be executed with 8 DRAM
> cycles but they actually complete every 8 or more clocks ie 20-25ns
> window. That allows for some radical simplifications too, no data
> cache. Having to use SDRAM fo same task would allow 70ns latency to be
> hidden well, but it has very little bank concurrency so the threads
> would hardly ever be allowed to do ld/st.
> 
> 
> johnjakson at usa dot com
> transputer2 at yahoo
> 

Thanks JJ.

Are there any open implementations that demonstrate the multi threaded 
approach you've mentioned?

P.S.

How is that Transputer going?

Article: 85966
Subject: Re: Ideal CPU for FPGA?
From: dave <dave@dave.dave>
Date: Sun, 19 Jun 2005 11:55:02 +0100
dave wrote:
> Philip Freidin wrote:
> 
>> On Sun, 19 Jun 2005 01:23:50 +0100, dave <dave@dave.dave> wrote:
>>
>> Actually both questions are relevant...
>> ....
>> ....
>> The first CPU in an FPGA (RISC4005/R16) and XSOC/XR16 and NIOS, and
>> MicroBlaze all look like this. They all easily outperform legacy
>> CPUs implemented in the same FPGAs, because they are tuned to the
>> resources that the FPGAs have to offer.
>>
>> Philip Freidin
>>
>> Philip Freidin
>> Fliptronics
> 
> 
> Thanks.

BTW: I didn't mean to imply the first paragraphs were not relevant!!!

Article: 85967
Subject: Re: Idea exploration - Image stabilization by means of software.
From: Paul Keinanen <keinanen@sci.fi>
Date: Sun, 19 Jun 2005 14:17:13 +0300
On Sun, 19 Jun 2005 00:57:12 -0700, "Jon Harris"
<jon_harrisTIGER@hotmail.com> wrote:

>Another difficulty I see is that when you have a severely underexposed picture,
>you are going to lose a lot of shadow detail since it ends up below the noise
>floor .  A perfectly linear sensor with no noise wouldn't have this problem, but
>real-world ones certainly do.  Consider, for example, a picture that has a
>black-to-white gradient.  When exposed normally, the camera records levels from
>0 (full black) to maximum (full white).  But with an exposure that is 1/20 of
>the proper value, much of the dark gray portion will be rendered as 0 (full
>black). Even after you add the 20 exposures together, you still get full black
>for those dark gray portions.  So everything below a certain dark gray level is
>clipped to full black.

With an ideal noiseless sensor, preamplifier and A/D converter this
would indeed be a problem. Early digital audio recordings in the
1960s/70s suffered from this problem, but by adding dithering noise of
about 1 LSB before the ADC, (low-frequency) tones well below 1 LSB
could be recorded.

In a camera system, the sensor thermal noise and the preamplifier
noise act as the dithering noise, and by averaging several samples,
the actual value between two ADC steps can be calculated.

By turning up the preamplifier gain (the ISO setting), you will
certainly get enough dithering noise to record and average levels
below 1 LSB.
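
A quick C simulation of that effect (the signal level, dither
amplitude and sample count are arbitrary illustration values): a
constant input of 0.3 LSB quantizes to 0 every time without noise, but
with roughly 1 LSB of dither the average of many quantized samples
converges back toward 0.3 LSB:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* quantize to whole LSB steps, clipping at zero like a unipolar ADC */
static double quantize(double x)
{
    return (x < 0.0) ? 0.0 : floor(x + 0.5);
}

int main(void)
{
    const double signal = 0.3;      /* true level: 0.3 LSB, below one step */
    const int    n      = 100000;   /* number of averaged samples */
    double sum_clean = 0.0, sum_dithered = 0.0;

    srand(12345);
    for (int i = 0; i < n; i++) {
        /* roughly uniform dither of +/-0.5 LSB (about 1 LSB peak-to-peak) */
        double noise = ((double)rand() / RAND_MAX) - 0.5;
        sum_clean    += quantize(signal);
        sum_dithered += quantize(signal + noise);
    }
    printf("no dither  : average = %.3f LSB\n", sum_clean / n);     /* 0.000 */
    printf("with dither: average = %.3f LSB\n", sum_dithered / n);  /* ~0.300 */
    return 0;
}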

Paul


Article: 85968
Subject: Re: Idea exploration - Image stabilization by means of software.
From: Paul Keinanen <keinanen@sci.fi>
Date: Sun, 19 Jun 2005 14:17:14 +0300
On Sun, 19 Jun 2005 10:33:58 +0200, Anton Erasmus
<nobody@spam.prevent.net> wrote:

>Yes, but at what exposure time does one currently hit the wall, and
>where does the actual theoretical wall lie ? Also if one could cool
>down the sensor, by how much would the noise floor be reduced ?

The sensor noise is halved for a temperature drop of 6-10 degrees
centigrade, at least for CCDs. Current cameras consume their batteries
quite fast, which also heats the sensor and thus generates extra
noise.

With Peltier elements the sensor could be cooled to -40 C, but the
sensor would have to be put into vacuum to avoid condensation
problems.
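
Putting those two figures together (and assuming, purely for
illustration, a sensor that starts near +25 C), a short C calculation
of the noise reduction implied by cooling to -40 C:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double t_start  =  25.0;  /* assumed starting temperature, deg C */
    const double t_cooled = -40.0;  /* Peltier-cooled temperature, deg C   */
    const double drop = t_start - t_cooled;            /* 65 C drop */

    /* noise halves every 6..10 C, so the reduction is 2^(drop/step) */
    printf("halving every 10 C: noise reduced ~%.0fx\n", pow(2.0, drop / 10.0));
    printf("halving every  6 C: noise reduced ~%.0fx\n", pow(2.0, drop / 6.0));
    return 0;
}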

The photon (Poisson) noise should also be considered, but since the
original aim was to use longer exposure times (allowed by image
stabilisation), this should not be an issue.

Paul


Article: 85969
Subject: Re: Ideal CPU for FPGA?
From: "JJ" <johnjakson@yahoo.com>
Date: 19 Jun 2005 04:18:28 -0700
Read up on Sun's Niagara and also Raza; he was the architect of the
Athlon and later, as a VC, helped get Niagara off the ground, then did
the same again for the Raza MT MIPS arch. These and my MT share quite
a few ideas, but I go off in a different direction, esp. with the
inverted MMU and Transputer stuff.

I don't know of any open source MT designs; perhaps mine will be, or
won't. I am sure that if the open-source fans really want to do it,
they could figure it out, but they should target an FPGA and not an
ASIC; ASIC perf comes for free after that, and it's not as if many
open cores ever go to ASIC. I suspect, though, that since most comp
arch students use the H & P textbooks, which are entirely
single-threaded, that's all we will see from students.

At the moment I am spending most of my time on the C compiler, just
trying to get the function stuff wrapped up and joined up with code
packing. ASAP I go back to getting the ISA simulator to validate what
the compiler will give it, then update the RTL code, then the Verilog,
and then the memory interface for RLDRAM, or at least an 8-way
threaded BlockRAM/SRAM model of it (with an artificial 20ns latency).
Even my starter S3 board will be able to model the idea of a threaded
cpu & memory, even with its meager SRAM.

Since V4 and WebPack 7 have been out, I only today tried to redo the
P/R of the V2P on them; so far the results are not too good, 320MHz is
now barely 200MHz, so more time wasted there. Even V4 doesn't give me
anything yet in perf; I will have to redo the flow again, but the SW
comes 1st.

Bye for now

JJ


Article: 85970
Subject: Re: Idea exploration - Image stabilization by means of software.
From: Steve Underwood <steveu@dis.org>
Date: Sun, 19 Jun 2005 19:37:06 +0800
Paul Keinanen wrote:
> On Sun, 19 Jun 2005 00:57:12 -0700, "Jon Harris"
> <jon_harrisTIGER@hotmail.com> wrote:
> 
> 
>>Another difficulty I see is that when you have a severely underexposed picture,
>>you are going to lose a lot of shadow detail since it ends up below the noise
>>floor .  A perfectly linear sensor with no noise wouldn't have this problem, but
>>real-world ones certainly do.  Consider, for example, a picture that has a
>>black-to-white gradient.  When exposed normally, the camera records levels from
>>0 (full black) to maximum (full white).  But with an exposure that is 1/20 of
>>the proper value, much of the dark gray portion will be rendered as 0 (full
>>black). Even after you add the 20 exposures together, you still get full black
>>for those dark gray portions.  So everything below a certain dark gray level is
>>clipped to full black.
> 
> 
> With an ideal noiseless sensor and preamplifier and an A/D converter
> this would indeed be a problem. Early digital audio recordings in the
> 1960/70 suffered from this problem, but by adding dithering noise
> before the ADC at about 1 LSB, (low frequency) tones well below 1 LSB
> could be recorded.

LOL. That is rather revisionist. Do you know of any converters of that 
vintage which were quiet and linear enough for a lack of dither to have 
been an issue?

> In a camera system, the sensor thermal noise and the preamplifier
> noise acts as the dithering noise and by averaging several samples,
> the actual value between two ADC steps can be calculated.
> 
> By turning up the preamplifier gain (the ISO setting), you will
> certainly get enough dithering noise to record and average levels
> below 1 LSB.

Regards,
Steve

Article: 85971
Subject: Lattice LFEC
From: Jedi <me@aol.com>
Date: Sun, 19 Jun 2005 11:53:05 GMT
Hello..

Is it normal that the same core which performs well
for an Altera Cyclone device can only run at half speed
on an LFEC20-5 device?

I tried with several CPU cores from opencores.org,
and the Lattice LFEC20 mostly shows half the performance
of the Cyclone...


rick

Article: 85972
Subject: globally asynchronous vs locally synchronous?
From: "pasacco" <pasacco@gmail.com>
Date: 19 Jun 2005 05:21:52 -0700
Hi

I implemented a simple VHDL-written processing node (processor, BRAM,
memory controller and network controller).

And I am trying to connect 2 processing nodes in an FPGA, a V2Pro.

The two processors communicate directly using a handshaking protocol.

The problem is the clock.

When I use the same clock everywhere, then remote memory access is not okay,
but local memory access is okay.

When I use different clocks (i.e., processor and memory clocks negated with
respect to each other), then remote memory access is okay, but local memory
access is not okay.

The questions are:
- Is it okay if we use the same clock everywhere? Then it will be
globally/locally synchronous.
- If it is better to use globally asynchronous locally synchronous
(GALS), how can we do that?

Thank you for any comment and pointer


Article: 85973
Subject: Spartan 3 availability
From: Mike Harrison <mike@whitewing.co.uk>
Date: Sun, 19 Jun 2005 12:38:05 GMT
Further to recent discussions here, I just noticed that S3s have appeared in the Xilinx web store. 
A few are even shown as in stock....


Article: 85974
Subject: Microblaze address space and variables
From: "Marco" <marcotoschi@_no_spam_email.it>
Date: Sun, 19 Jun 2005 14:55:24 +0200
Hello,
I must create a memory for an LCD display. This is part of a microcontroller
based on MicroBlaze.

I thought of doing it using a variable, a matrix:  display[X][Y]  (the software
video memory).

This variable should sit exactly in the MicroBlaze address space where I have
mapped a blockram (the hardware video memory) from 0x77200000 to 0x7720FFFF.

In this way, by working with my variable I could edit the hardware video memory.

But I'm not able to... how may I do it?

Many Thanks in Advance
Marco Toschi
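
One common way to do this in C (shown only as a sketch: the base
address is the one given above, but the element type and the X/Y
dimensions are placeholders that must match the real display
controller) is to alias the block RAM through a volatile pointer
instead of declaring a normal array:

#include <stdint.h>

/* Base of the block RAM mapped as hardware video memory (from the post).
 * X and Y are placeholders; X*Y*sizeof(element) must fit in the 64 KB
 * window 0x77200000..0x7720FFFF.
 */
#define VIDEO_MEM_BASE  0x77200000UL
#define X 80
#define Y 25

typedef volatile uint8_t video_mem_t[X][Y];

/* "display" behaves like a 2-D array, but every access really goes to
 * the block RAM rather than to a variable placed by the compiler.
 */
#define display (*(video_mem_t *)VIDEO_MEM_BASE)

void put_char_at(unsigned x, unsigned y, uint8_t c)
{
    if (x < X && y < Y)
        display[x][y] = c;
}

The alternative is to place an ordinary array in a dedicated section
located at that address via the linker script, but the pointer cast
avoids touching the linker script.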





