
Messages from 85825

Article: 85825
Subject: Re: uart / Nios2
From: "amko" <sinebrate@yahoo.com>
Date: 16 Jun 2005 15:47:06 -0700
Links: << >>  << T >>  << A >>
You can write to my friend and explain your problem (he is an expert on
Nios: uros.platise@ijs.si).


Article: 85826
Subject: Re: Where to buy a Xilinx XCR3384XL tq144 CPLD?
From: "Bruno" <bmscc@netcabo.pt>
Date: Thu, 16 Jun 2005 23:57:44 +0100
Links: << >>  << T >>  << A >>
Thanks for the link!
Best Regards

"David Brown" <david.brown_spamnot@vertronix.com> escreveu na mensagem
news:OpudnbJkKolY-DLfRVn-vw@adelphia.com...
> Did you try http://www.sierraic.com/ . I never purchased from them, but it
> looks like they may have stock.
>
> I found 34 of the XCR3384XL10TQ144C at this site.
>
> D Brown
>
> "Bruno" <bmscc@netcabo.pt> wrote in message
> news:42af5b7d$0$32020$a729d347@news.telepac.pt...
> > Does anyone know where I can buy a Xilinx XCR3384XL tq144 CPLD?
> >
> > Thanks in advance
> > Best Regards
> > Bruno
> >
> >
>
>



Article: 85827
Subject: Re: JTAG programming: JAM files versus ISC (IEEE1532) files
From: "David Colson" <dscolson@rcn.com>
Date: Thu, 16 Jun 2005 19:02:44 -0400
Links: << >>  << T >>  << A >>
Just so you know, I have used both the Jam and JDrive methods
to program a Xilinx Platform Flash. Programming time is more a function
of the device being programmed than of the type of programming file being used.
I saw no great difference between Jam and JDrive.

Dave Colson
<mandana@physics.ubc.ca> wrote in message 
news:1118944242.773340.239160@f14g2000cwb.googlegroups.com...
> Thanks for the reply.
>
> I downloaded the Jam Player code; it's not the greatest code, and there is
> no document detailing the STAPL format.
> What I liked about 1532 was the dedicated ISP commands that would
> provide almost direct access to the flash. But I'm afraid that with
> STAPL I may have to go through the whole boundary scan chain and I can
> see how that needs lots of memory.
>
> Thanks,
> m
> 



Article: 85828
Subject: Re: Availability of Spartan3
From: "xilinx_user" <barrinst@ix.netcom.com>
Date: 16 Jun 2005 16:07:46 -0700
Links: << >>  << T >>  << A >>
I called NuHorizon today - at their 800 number - and was told that there
was an order placed June 9 with expected delivery August 30th. This was
using the added suffix suggested by Steve Knapp and Peter Alfke.

My interpretation of this is that there are, in effect, virtually no
parts available for customers like me who only order 35-50 at a time.

I look forward to Peter Alfke working the system inside Xilinx; hopefully
he will accomplish something and have better news to report.

The net result of this is that my product is in jeopardy. I've used
Xilinx parts for nearly 15 years and would hate to move over to Altera,
but I am getting fearful I might have to consider that in order to
protect the investment my company has in its product development.

I think Xilinx is a great company, so I am really hoping there is news
fairly soon to restore my confidence.


John_H wrote:
> "User" <tech3@cavelle.com> wrote in message
> news:1118958956.419423.83120@g14g2000cwa.googlegroups.com...
> > I heard back from the major distributor today, who said that there are
> > indeed no parts available for prototype / pre-production and certainly
> > not larger quantities.  They thought there might be a possibility of
> > obtaining something in an industrial temperature grade, but otherwise
> > nothing, magic 300mm suffix or no magic suffix.  Otherwise there
> > appears to be nothing anywhere.
>
> <snip>
>
> Could you mention for those in-the-know (since it's not in this thread):
>   1) what part, package, and speed grade
>   2) which distributor is giving you the info (city may help)
>
> I love to see things get solved but I don't like to see people get upset
> without providing the information needed to help them through the "side
> channels" here on the newsgroup.  If there's bad info getting around, that
> info should be squashed.


Article: 85829
Subject: Re: Availability of Spartan3
From: "John_H" <johnhandwork@mail.com>
Date: Thu, 16 Jun 2005 23:40:24 GMT
Links: << >>  << T >>  << A >>
So, once again:

> > Could you mention for those in-the-know (since it's not in this thread):
> >   1) what part, package, and speed grade

(and lead-free or not)



Article: 85830
Subject: Re: Availability of Spartan3
From: "xilinx_user" <barrinst@ix.netcom.com>
Date: 16 Jun 2005 17:03:16 -0700
Links: << >>  << T >>  << A >>
I gave the following part #: XC3S200-4PQ208C0974, since this is listed
on the website.

Since there is no letter "G" in the part number, this is the leaded
version.


Article: 85831
Subject: Automagic Circuit Pipelining (was: Re: Auto pipeline logic?? )
From: John McCluskey <john_mccluskey@hotmail.com>
Date: Thu, 16 Jun 2005 20:16:17 -0400
Links: << >>  << T >>  << A >>
On Thu, 16 Jun 2005 09:05:12 +0100, Ben Jones wrote:

>> : Well, inserting registers changes your design in a fundamental way.
>> : Most circuits I can think of would just stop working if you added
>> : registers to them at random. Only you, the designer, know exactly
>> : how much pipelining it is legal to apply to a given part of your
>> : circuit. So I don't believe such a tool exists - certainly not in the
>> : general case.
>> This is very true, but there's no reason a designer couldn't specify a
>> bunch of signals (e.g. the data signal from a combinatorial multiply and
>> associated control signals) and some tool would add arbitrary (up to a user
>> specified limit) stages of pipelining to all signals to meet timing, with
>> logic/register shuffling.  This would only work if the control and data flows
>> can be arbitrarily pipelined, but many ops can be described this way.
> 
> True, although I don't see much merit in doing it that way. In FPGAs, the
> pipeline registers are essentially free (because they're there after every
> LUT, even if you don't use them). So you don't get much advantage from
> "just" meeting timing - if you have four clock cycles do do something in,
> then you might as well take all four - who cares? You'll get better results
> out of the tools that way, too.
> 
> 
> Cheers,
> 
>         -Ben-
> 

I've tried to write my code this way for some time now.  It is a pretty hard
design style.  You start with an algorithm or computation, apply it to an
arbitrarily sized chunk of data, and build the circuit so that registers are
inserted at "appropriate" points during elaboration of the structure.   It
would be easier if the VHDL language standard were modified to support
"return generic values" that are computed during component elaboration and
returned as compile-time constants to the upper-level code that instantiates
the component.   The reason for this, of course, is that the lower-level
component should contain the code to calculate the appropriate latency and
then return this value to the upper level of the hierarchy so that the other
signal paths can have their latency balanced.

Since VHDL doesn't let you do this, the only other solution I can think of
is to write functions in a package that perform the latency calculation at
the top level, and then pass the latency as a parameter to the lower level
components.  This works OK, but requires laborious latency calculations at
each level of the hierarchy, making it difficult to cleanly separate the
functionality of the sub-components.   But you *can* do it.  There are
some types of circuits where it's really tough to calculate the latency
based on the circuit parameters.  CRCs are a good example of this.  The
number of levels of logic generated by a parity matrix depends on the
input data width, as well as the polynomial being used.   Actually, now
that I think about it, the number of levels of logic is proportional to
the log2 of the maximum Hamming weight of the columns.   OK, bad example :-)

Still, if you write a component with a generic like so:

generic( delay : natural );

and then actually implement that latency in the component, it's pretty
easy after that to get the component to run at the silicon limit for your
target FPGA technology (given enough latency, of course).   It's a pain to
have to write the functions that calculate exactly where to place the
registers, since you have to take the latency as a given and put it where
it will do the most good.   I usually put the first register at the
output, the 2nd register at the input, and 3rd and higher registers in the
middle.

 In some cases you have to depend on the retiming capability of the
synthesis tool to put the registers where they are needed. This is usually
because the actual net where it needs to go doesn't exist until the
component is elaborated. Oh yeah, now I remember...  I have this problem
with a CRC_ECC component that does single error correction on input data
protected by a CRC.   The error syndromes turn out to be a big static
table with unpredictable values.  The circuitry for recognizing the
syndromes has *very* unpredictable levels of logic.  This can't be
pipelined in the source, because the nets don't exist until elaboration,
and even then they don't have names you can access.  Retiming is the only
solution to getting decent speed with a circuit like this.  Even with
retiming, I still have to guess at an appropriate value for latency.  In
the future, we might solve this by iterative synthesis that sets a latency
or logic level attribute on the component label, so that during the 2nd
iteration of synthesis, we can detect situations like this and add
appropriate latency.  But this will require a language change, as usual :-(

Here's a tip on writing components with variable latency:

First, create a set of delay elements (registers) that take a delay
parameter for various types.  I use:

std_logic, std_logic_vector, unsigned, signed.   
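
For illustration, a minimal sketch of such a delay element for std_logic_vector
might look like the following (the entity name slv_delay and the port names are
just placeholders of mine, assuming a free-running clock and no reset):

library ieee;
use ieee.std_logic_1164.all;

entity slv_delay is
  generic (
    delay : natural  := 1;   -- number of register stages; 0 means a plain wire
    width : positive := 8
  );
  port (
    clk : in  std_logic;
    d   : in  std_logic_vector(width-1 downto 0);
    q   : out std_logic_vector(width-1 downto 0)
  );
end entity slv_delay;

architecture rtl of slv_delay is
  type pipe_t is array (0 to delay) of std_logic_vector(width-1 downto 0);
  signal pipe : pipe_t;
begin
  pipe(0) <= d;                      -- stage 0 is the undelayed input

  gen_regs : for i in 1 to delay generate
    reg : process (clk)
    begin
      if rising_edge(clk) then
        pipe(i) <= pipe(i-1);        -- one register per stage
      end if;
    end process reg;
  end generate gen_regs;

  q <= pipe(delay);                  -- delay = 0 degenerates to a wire
end architecture rtl;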

I use the step function extensively to distribute delays in a component.  

function step_function( i, j: integer ) return natural is
 begin
 if i >= j then
   return 1;
 else
   return 0;
 end if;
end step_function;

then given a generic delay value, I can compute the delays for various
points like so:

constant bottom_delay : natural := step_function( delay, 1);
constant top_delay : natural := step_function( delay - bottom_delay, 1 );
constant middle_delay : natural := delay - top_delay - bottom_delay;

This puts the first available register at the bottom, the 2nd goes to the
top, and the remaining in the middle.  The delays can never be negative.

It's not uncommon to use one of these quantities for passing latency to
subcomponents.  The middle delay might be used as a delay parameter, for
example.  In some rare cases, you may have different latencies on
different inputs or outputs.  This depends on the demands of the higher
level hierarchy.   
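
Putting this together, a sketch of how those constants might be consumed in
practice could look like the following, reusing the slv_delay element sketched
above.  The wrapper entity, the 8-bit datapath, and the use of slv_delay as a
stand-in for the real processing sub-component are assumptions for
illustration only:

library ieee;
use ieee.std_logic_1164.all;

entity pipelined_wrapper is
  generic ( delay : natural := 3 );  -- total latency handed down from above
  port (
    clk  : in  std_logic;
    din  : in  std_logic_vector(7 downto 0);
    dout : out std_logic_vector(7 downto 0)
  );
end entity pipelined_wrapper;

architecture rtl of pipelined_wrapper is
  function step_function( i, j : integer ) return natural is
  begin
    if i >= j then
      return 1;
    else
      return 0;
    end if;
  end function step_function;

  constant bottom_delay : natural := step_function( delay, 1 );
  constant top_delay    : natural := step_function( delay - bottom_delay, 1 );
  constant middle_delay : natural := delay - top_delay - bottom_delay;

  signal din_r    : std_logic_vector(7 downto 0);
  signal core_out : std_logic_vector(7 downto 0);
begin
  -- "2nd register at the input": top_delay stages on the input path
  u_in : entity work.slv_delay
    generic map ( delay => top_delay, width => 8 )
    port map ( clk => clk, d => din, q => din_r );

  -- stand-in for the real processing sub-component, which would accept
  -- middle_delay as its own latency generic and place those registers itself
  u_core : entity work.slv_delay
    generic map ( delay => middle_delay, width => 8 )
    port map ( clk => clk, d => din_r, q => core_out );

  -- "first register at the output": bottom_delay stages on the output path
  u_out : entity work.slv_delay
    generic map ( delay => bottom_delay, width => 8 )
    port map ( clk => clk, d => core_out, q => dout );
end architecture rtl;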

Sometimes the latency is just obvious, and all you have to do is write a
function that returns the value based on some input parameters.  Recursive
trees fall into this category.   This can be a little tricky sometimes,
and having a way to return an elaborated value from a subcomponent would
be really handy here.  Of course, having this capability would open the
door to dependency loops in elaboration, so iteration limits might have to
be set.  
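
For example, for a binary reduction tree, assuming one pipeline register per
tree level (an assumption made purely for illustration), the latency function
boils down to a ceil(log2) computation and could be packaged like this:

package latency_pkg is
  function tree_latency( n_inputs : positive ) return natural;
end package latency_pkg;

package body latency_pkg is
  -- latency of a binary reduction tree, assuming one register per tree level
  function tree_latency( n_inputs : positive ) return natural is
    variable levels : natural  := 0;
    variable span   : positive := 1;
  begin
    while span < n_inputs loop   -- levels = ceil(log2(n_inputs))
      span   := span * 2;
      levels := levels + 1;
    end loop;
    return levels;
  end function tree_latency;
end package body latency_pkg;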

Having said all this, I'll close with the assertion that I can see the day
coming when it won't be just the logic that needs pipelining... Someday
we'll have to pipeline the routing too.  Probably in the 45 nm node.

John McCluskey

P.S. I'm a Xilinx FAE, but writing this in my "off hours".





Article: 85832
Subject: Re: Idea exploration - Image stabilization by means of software.
From: "Kris Neot" <Kris.Neot@hotmail.com>
Date: Fri, 17 Jun 2005 09:57:12 +0800
Links: << >>  << T >>  << A >>
Correct, my idea is much like taking multiple short exposures and then
combining them while taking the displacement into account. If
similar techniques have already been done with telescopes, then it is
old news. No need to explore it further.



"Paul Keinanen" <keinanen@sci.fi> wrote in message
news:sjh3b15dd6l1ppd9i4oilql87nlfis7mpo@4ax.com...
> On Thu, 16 Jun 2005 08:58:16 -0500, Chuck Dillon <spam@nimblegen.com>
> wrote:
>
> >I believe he's talking about a single frame exposure not video.  He's
> >talking about motion during the exposure and I assume a problem of
> >varying affects of the motion on the different sensors (e.g. R,G and B)
> >that combine to give the final image.
>
> If the end product is a single frame, what prevents you from taking
> multiple exposures and combining the shifted exposures into a single
> frame?
>
> In ground based optical astronomy, the telescope is stable, but the
> atmospheric turbulence deflects the light rays quite frequently to
> slightly different directions. Taking a long exposure would cause
> quite blurred pictures. Take multiple short exposures, shift each
> exposure according to the atmospheric deflection and combine the
> exposures. Look for "stacking" and "speckle-interferometer".
>
> To do the image stabilisation in a camera, the MPEG motion vectors
> could be used to determine the global movement between exposures, but
> unfortunately getting reliable motion vectors from low SNR exposures
> is quite challenging. Of course if 3 axis velocity sensors are
> available, these can be used to get rid of any ambiguous motion
> vectors and also simplify searching for motion vectors.
>
> With reliable motion vectors for these multiple low SNR exposures, the
> exposures can be shifted accordingly, before summing the exposures to
> a single high SNR frame.
>
> Paul
>



Article: 85833
Subject: Re: BGA Rework/Prototype Placement Anyone?
From: "Jason Berringer" <jberringer.at@sympatico.dot.ca>
Date: Thu, 16 Jun 2005 21:58:28 -0400
Links: << >>  << T >>  << A >>
Circuit Centre Inc.

http://www.circuitcentre.com/index2.html

We get BGAs placed for $75 a chip/board (Canadian), and they are located in
Scarborough (east of Toronto). They have great customer service, and care
about the little guys.

Hope this helps,

Jason

"James Morrison" <spam1@stratforddigital.ca> wrote in message
news:p5ise.10538$5u4.34021@nnrp1.uunet.ca...
> (This was originally posted on sci.electronics.design and was intended
> to be cross-posted to these groups but that got missed before sending.
> So here it is posted to these groups but not sci.electronics.design--if
> you like please cross-post your reply to that newsgroup.  Sorry for the
> inconvenience).
>
> -------------
>
> Hello,
>
> Does anyone else find themselves in the position I often find myself in:
> It is a royal pain to get prototype quantities populated with BGA
> components.  If an assembly house has BGA machinery they are typically
> too big to care about the little guy like me who doesn't need all that
> many boards assembled but is willing to pay for the service for a few
> boards.
>
> For the first few boards I often like to populate in blocks and do tests
> at each stage so if there is a problem it is much easier to isolate the
> problem.  If you populate the whole board and then find a problem (if
> you can) fixing it often means removing components that you've already
> populated.
>
> It would also be very expensive to populate a small run using an
> SMT-line and then find out that the power supply is going to blow up
> parts (especially a worry if you have a boost power supply).
>
> Does anyone know of anywhere that will take a few boards and populate a
> few BGAs by hand?  In Canada?  In southern Ontario?
>
> If there is nothing out there, is anyone interested in this type of
> service.  I need this service so I was considering purchasing a rework
> station and then making the service available for a fee to pay for the
> station and to provide a service to the design world.  Would anyone use
> this?
>
> Thanks for your input,
>
> James.
>
>
>



Article: 85834
Subject: Idea exploration 1.1 - Inertia based angular sensor.
From: "Kris Neot" <Kris.Neot@hotmail.com>
Date: Fri, 17 Jun 2005 10:07:50 +0800
Links: << >>  << T >>  << A >>
This idea is meant to serve my old idea of "Image stabilization by means of
software". I was aware that it is difficult to find an angular sensor that can
run at 1 MS/s. I would have a cubic enclosure in which two perpendicular walls
are made of small, fast image sensors. I would use a hanging ball and a laser
shining on it. The image sensors would detect the exact location of the ball
(hopefully 1000 times a second). When the enclosure (and thus the camera body)
shakes, the ball will remain inert for that short period, so the image sensors
can give a reading of the ball's location and the displacement can be
calculated.

Does this idea work? :)




Article: 85835
Subject: what's my problem with downloading?
From: "Qi Sun" <qisun@NOSPAM_itee.uq.edu.au>
Date: Fri, 17 Jun 2005 12:11:01 +1000
Links: << >>  << T >>  << A >>
Hello!
I am using Xilinx Platform Studio to download the bitstream of an example to an
ML401 board. I followed the steps that the documentation shows:
1  generate netlist
2  generate bitstream
3  connect the PC to the ML401 board with a PC4 cable, switch on the power on
   the board
4  Tools -> download within XPS
5  Tools -> XMD
6  dow microblaze_0/code/...
7  run

But the computer screen shows: cannot find the .elf file. However, I checked
and the *.elf file is there.
Does anyone know what the problem is?

Thanks!




Article: 85836
Subject: Re: Availability of Spartan3
From: "Leon" <leon_heller@hotmail.com>
Date: 16 Jun 2005 19:25:23 -0700
Links: << >>  << T >>  << A >>
Altera isn't all that good at supplying their devices in prototype
quantities. If you look at their web site and see what is actually
available for purchase, there isn't very much there.

Leon


Article: 85837
Subject: Re: Idea exploration 1.1 - Inertia based angular sensor.
From: Jeremy Stringer <jeremy@_NO_MORE_SPAM_endace.com>
Date: Fri, 17 Jun 2005 14:56:39 +1200
Links: << >>  << T >>  << A >>
Kris Neot wrote:
> This idea is meant to serve my old idea of "Image stabilization by means of
> software".
> I was aware that it is difficult to find an angular sensor that can run at

Just a quick point - is this discussion really relevant to:
a) comp.arch.fpga
b) comp.arch.embedded
c) comp.dsp
d) sci.image.processing
?

Maybe if you were making the application work on an fpga with an 
embedded processor, using a DSP algorithm based on images of a small 
ball, but...

Jeremy

Article: 85838
Subject: Re: Idea exploration - Image stabilization by means of software.
From: "Kris Neot" <Kris.Neot@hotmail.com>
Date: Fri, 17 Jun 2005 11:14:34 +0800
Links: << >>  << T >>  << A >>
Found a flaw in my idea after a little test.

My deer picture is perfectly exposed; when my trick is applied to it,
I end up with a very dull picture. So I believe that with a picture that
has motion blur, the outcome will be even worse after applying my idea.
To improve the idea, I would need the image sensor to run at 1000 fps,
which is impossible.

a = imread('deer.jpg'); % size = (864,576,3)
b = int32(a);
c = zeros(880, 585, 3);
cz = zeros(880, 585, 3);
d = [5,3; 8,3; 2,5; 3,7; 5,1; 2,6; 4,4; 2,7];

for i = 1:1:8
    t = cz;
    t(d(i,1):(d(i,1)+864-1), d(i,2):(d(i,2)+576-1), 1:3) = a;
    c = c + t;
    fprintf('done with %d\n', i);
end
c = c / 8;
c = uint8(c);

figure(1);
image(a);
figure(2);
image(c);





"Kris Neot" <Kris.Neot@hotmail.com> wrote in message
news:42b15483$1@news.starhub.net.sg...
> Basically let the camera take its picture though the user's hand is shaky, and
> use a mechanism to record the angle of the user's hand shake in steps of 1 us;
> now I get V = [v0, v1, ...v(n-1)]. With this vector, I calculate the
> displacement vector the image has moved, in terms of number of pixels, P.
>
> Assume the image sensor's voltage rises linearly w.r.t. exposure time (or any
> characterized waveform).
>
> I slice the image into n slices by dividing each pixel's RGB data into RGB/n
> (depending on the waveform of the sensor's characteristics), and use a
> technique similar to motion compensation: move the slices according to the
> displacement vector P and add them together to reconstruct the final image.
>
> How does my idea work?
>
>



Article: 85839
Subject: Re: Idea exploration 1.1 - Inertia based angular sensor.
From: "Kris Neot" <Kris.Neot@hotmail.com>
Date: Fri, 17 Jun 2005 11:15:58 +0800
Links: << >>  << T >>  << A >>
> Just a quick point - is this discussion really relevant to:
> a) comp.arch.fpga
> b) comp.arch.embedded
> c) comp.dsp
> d) sci.image.processing
> ?
>

More groups, more replies. :)




Article: 85840
Subject: Re: Idea exploration 1.1 - Inertia based angular sensor.
From: "Rimmer" <rimmerx@reddwarf2.com>
Date: Fri, 17 Jun 2005 15:18:46 +1200
Links: << >>  << T >>  << A >>

"Jeremy Stringer" <jeremy@_NO_MORE_SPAM_endace.com> wrote in message
news:42b23c62$1@clear.net.nz...
> Kris Neot wrote:
> > This idea is meant to serve my old idea of "Image stabilization by means of
> > software".
> > I was aware that it is difficult to find an angular sensor that can run at
>
>
> Just a quick point - is this discussion really relevant to:
> a) comp.arch.fpga
> b) comp.arch.embedded
> c) comp.dsp
> d) sci.image.processing
> ?
>
> Maybe if you were making the application work on an fpga with an
> embedded processor, using a DSP algorithm based on images of a small
> ball, but...
>
> Jeremy
Yes, and piss off and whinge elsewhere...

Rimmer



Article: 85841
Subject: Xilinx Spartan 3 DCI Power Consumption
From: James Morrison <spam1@stratforddigital.ca>
Date: Thu, 16 Jun 2005 23:29:49 -0400
Links: << >>  << T >>  << A >>
Hello everyone,

Does anyone have any good numbers for the power consumed by the
DCI circuitry on the I/Os?  The data sheet says, and I quote, "more
power".  I kid you not--look it up.

There is a bit more info in a solution record:

http://www.xilinx.com/xlnx/xil_ans_display.jsp?iLanguageID=1&iCountryID=1&getPagePath=19628

But this looks like they came up with the numbers the same way I did in
my lab.  I would expect Xilinx to know this information, since they
designed the chip, and to be able to give some bounds on this number and
some equations for how to calculate it.  Anyone can measure the extra
current with DCI enabled and disabled, as I did.

BTW, I measured 400mA more current on the 2.5V rail when DCI was enabled
than when it wasn't.  I have 7 receivers spread over 3 banks.

James.


Article: 85842
Subject: Re: LUT, how to?
From: Matthieu MICHON <matthieu.michonRemove@laposte.net>
Date: Thu, 16 Jun 2005 20:55:44 -0700
Links: << >>  << T >>  << A >>
Giox wrote:
> can you provide me some reference to the documentation that documents
> that?
> Moreover a LUT of 256 byte has an acceptable size or is too big?
> I know this is a stupid question, but I'm a newbye
> Thanks a lot
> 

Hi


I may be mistaken but, from where I sit, it seems that you are trying to
implement an 8:8 LUT (8-bit wide input, 8-bit wide output)?  If this is
the case, I guess a simple 256-byte ROM would do the job.
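
For illustration, a minimal sketch of such an 8-in/8-out LUT written as a
256-entry ROM with a synchronous read (so the tools can infer a block RAM)
might look like the following; the entity name is a placeholder of mine and
the table contents are just an identity mapping to be replaced with the real
values:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity lut_8x8 is
  port (
    clk  : in  std_logic;
    addr : in  std_logic_vector(7 downto 0);
    dout : out std_logic_vector(7 downto 0)
  );
end entity lut_8x8;

architecture rtl of lut_8x8 is
  type rom_t is array (0 to 255) of std_logic_vector(7 downto 0);

  -- placeholder contents: identity mapping; replace with the real table
  function init_rom return rom_t is
    variable r : rom_t;
  begin
    for i in rom_t'range loop
      r(i) := std_logic_vector(to_unsigned(i, 8));
    end loop;
    return r;
  end function init_rom;

  constant rom : rom_t := init_rom;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      dout <= rom(to_integer(unsigned(addr)));   -- synchronous read
    end if;
  end process;
end architecture rtl;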

Article: 85843
Subject: Re: NIOS2 exceptions...
From: "James Ball" <jalobaba@yahoo.com>
Date: Fri, 17 Jun 2005 04:07:49 GMT
Links: << >>  << T >>  << A >>
Yes, this is correct. The pc+4 of the interrupted instruction is saved by 
Nios II on an interrupt.
So, you need to subtract 4 from this value to return to the interrupted
instruction and continue on.

+james+

"Jedi" <me@aol.com> wrote in message news:0y%re.346$5v6.334@read3.inet.fi...
> Evening...
>
>
> Seen in u-boot source code that before returning from an interrupt
> the return address is adjusted by "-4"...
>
> And interestingly this needs to be done although the NIOS2
> datasheet says that the address of the next instruction
> is saved...
>
>
> thanx in advance
> rick 



Article: 85844
Subject: Re: Availability of Spartan3
From: rickystickyrick@hotmail.com
Date: 16 Jun 2005 22:04:52 -0700
Links: << >>  << T >>  << A >>


Don't worry xilinx_user, there aren't parts available for people who
need 1000's either :-)

Ricky.


Article: 85845
Subject: Re: Design partitioning issues
From: "Neo" <zingafriend@yahoo.com>
Date: 16 Jun 2005 22:06:05 -0700
Links: << >>  << T >>  << A >>
So basically it all points to ease of use and functionality.
Thanks for your answers.


Article: 85846
Subject: Re: Idea exploration - Image stabilization by means of software.
From: "Jon Harris" <jon_harrisTIGER@hotmail.com>
Date: Thu, 16 Jun 2005 22:08:13 -0700
Links: << >>  << T >>  << A >>
Are you taking pictures of stationary things from a stationary camera?  That's
what the astronomy case involves.  But since you are concerned with a moving
camera, I assumed you were talking about pictures taken with a hand-held camera.
In that case, the reason the shake creates blur is that the exposure is too
long.  And the reason for the long exposure is lack of light (due to lens
limitations, ambient light conditions, or both).  If you were to shorten the
exposure, you would end up with a picture that is too dark.

"Kris Neot" <Kris.Neot@hotmail.com> wrote in message
news:42b22b58@news.starhub.net.sg...
> Correct, my idea is much like taking multiple short exposures and then
> combining them while taking the displacement into account. If
> similar techniques have already been done with telescopes, then it is
> old news. No need to explore it further.
>
>
>
> "Paul Keinanen" <keinanen@sci.fi> wrote in message
> news:sjh3b15dd6l1ppd9i4oilql87nlfis7mpo@4ax.com...
> > On Thu, 16 Jun 2005 08:58:16 -0500, Chuck Dillon <spam@nimblegen.com>
> > wrote:
> >
> > >I believe he's talking about a single frame exposure not video.  He's
> > >talking about motion during the exposure and I assume a problem of
> > >varying affects of the motion on the different sensors (e.g. R,G and B)
> > >that combine to give the final image.
> >
> > If the end product is a single frame, what prevents you from taking
> > multiple exposures and combining the shifted exposures into a single
> > frame?
> >
> > In ground based optical astronomy, the telescope is stable, but the
> > atmospheric turbulence deflects the light rays quite frequently to
> > slightly different directions. Taking a long exposure would cause
> > quite blurred pictures. Take multiple short exposures, shift each
> > exposure according to the atmospheric deflection and combine the
> > exposures. Look for "stacking" and "speckle-interferometer".
> >
> > To do the image stabilisation in a camera, the MPEG motion vectors
> > could be used to determine the global movement between exposures, but
> > unfortunately getting reliable motion vectors from low SNR exposures
> > is quite challenging. Of course if 3 axis velocity sensors are
> > available, these can be used to get rid of any ambiguous motion
> > vectors and also simplify searching for motion vectors.
> >
> > With reliable motion vectors for these multiple low SNR exposures, the
> > exposures can be shifted accordingly, before summing the exposures to
> > a single high SNR frame.
> >
> > Paul



Article: 85847
Subject: Re: Idea exploration - Image stabilization by means of software.
From: "Kris Neot" <Kris.Neot@hotmail.com>
Date: Fri, 17 Jun 2005 13:31:38 +0800
Links: << >>  << T >>  << A >>
"Jon Harris" <jon_harrisTIGER@hotmail.com> wrote in message
news:11b4mltr031dld2@corp.supernews.com...
> Are you taking pictures of stationary things from a stationary camera?  That's
> what the astronomy case involves.  But since you are concerned with a moving
> camera, I assumed you were talking about pictures taken with a hand-held camera.
> In that case, the reason the shake creates blur is that the exposure is too
> long.  And the reason for the long exposure is lack of light (due to lens
> limitations, ambient light conditions, or both).  If you were to shorten the
> exposure, you would end up with a picture that is too dark.
>

Yeah, now I understand. My idea is flawed.



Article: 85848
Subject: USB2.0 UTMI Free IP Core Implentation
From: sayskimariano@yahoo.fr
Date: 16 Jun 2005 23:03:08 -0700
Links: << >>  << T >>  << A >>
Hello,

I am looking for some information about the free IP core from opencore.org. I
would like to implement it on my Spartan 3 FPGA, but I do not have enough
information.
 * How and when do you initialize it for the USB 2.0 setup (VID, PID, ...)?
 * Does anyone have an example of how to use it?

Thanks for any response.

Sayski


Article: 85849
Subject: Need application note for Motion controller with Xilinx
From: "Leeinhyuk" <engtech1@kornet.net>
Date: Fri, 17 Jun 2005 15:22:09 +0900
Links: << >>  << T >>  << A >>
Hello,
This is Mr. Lee, writing from Seoul, Korea.
I am writing regarding an application note for a motion controller built with a
CPLD chip.
I could not find such documents on your website.
Could you send me design examples or an application note for a conventional
motion controller chip implemented with a Xilinx CPLD?
My email address is leeih@chollian.net
Best regards,
IH Lee





