Martin- From your constraints I still don't see why frame buffers are out of the question (genlock devices do overlay all the time -- with frame buffers), but for the sake of argument this basically means that you need a hit/miss engine for every graphics primitive that exists. It would basically be a subroutine that operates on the coordinate and returns a boolean -- pixel on or off. But I can't see how to do this without one engine per primitive (or multiplexing a single engine).

Jake

"Martin Euredjian" <0_0_0_0_@pacbell.net> wrote in message news:<AIRcb.6331$0K3.502@newssvr27.news.prodigy.com>...
> "Bob Feng" wrote:
> > Your comments after your question do not make sense to me.
>
> You are thinking computers. I'm thinking video.
>
> Here's one possible scenario. You are required to overlay graphics
> (primitives: lines, circles, etc.) and text onto an incoming video feed,
> which is then output in the same format. The allowable input to output
> delay is in the order of just a few clocks --if that much at all-- not
> frames, not lines, a few clocks at best.
>
> Frame buffers are out of the question, of course. In addition to this, due
> to cost constraints, you are not allowed to have rendering memory for the
> graphics overlay. You, therefore, must render text and graphics on the fly,
> in real time, as the signal flows through.
>
> Text and horizontal/vertical lines are pretty easy to deal with. You start
> getting into circles and rotated lines or polygons and it gets interesting
> real fast. In contrast to this, rendering these primitives to a frame
> buffer (in a "traditional" computer-type application) is a no-brainer.
>
> > Usually a frame buffer is where you render. And then it goes through a DAC
> > and several video timing control units and reaches a CRT via a VGA or 13w3
> > connector.
>
> So, in the above context, nothing goes through a DAC.
> The video feed
> (assume analog) simply goes through some analog switches that, under FPGA
> control, select, on a pixel-by-pixel basis, from among the incoming video
> signal and a set of pre-established analog values (say white, to keep it
> simple).
>
> Hope this example clarifies it for you.

Article: 61051
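Jake's "subroutine that operates on the coordinate and returns a boolean" can be modeled in software first. Here is a minimal sketch of such a hit/miss engine for one primitive, a line segment; the function name, the bounding-box test, and the half-width threshold are my own illustrative choices, not something from the posts:

```python
# Hypothetical software model of a per-pixel "hit/miss engine" for a
# line primitive: given the pixel coordinate currently being scanned
# out, decide whether the overlay is on at that pixel.

def line_hit(x, y, x0, y0, x1, y1, half_width=0.5):
    """Return True if pixel (x, y) lies on the segment (x0,y0)-(x1,y1)."""
    dx, dy = x1 - x0, y1 - y0
    # Bounding-box test keeps the segment finite.
    if not (min(x0, x1) - half_width <= x <= max(x0, x1) + half_width and
            min(y0, y1) - half_width <= y <= max(y0, y1) + half_width):
        return False
    # Perpendicular distance from the pixel to the infinite line,
    # compared without a divide:
    # |dy*(x-x0) - dx*(y-y0)| <= half_width * |(dx,dy)|, both sides squared.
    return (dy * (x - x0) - dx * (y - y0)) ** 2 <= \
        (half_width ** 2) * (dx * dx + dy * dy)
```

In hardware each such test would be one engine (or one pass of a multiplexed engine) evaluated per pixel clock; the squared-distance comparison avoids the divide that a normalized distance would need.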
Hello Martin,

How do you do it? Must be an interesting project. In the past I did something similar, an animation video generator for digital video system test, but I dealt with lines, text, and rectangles only. For curves and circles, I think a sin()/cos() LUT (or coregen) is needed, no big deal, isn't it?

Article: 61052
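The sin()/cos() LUT the poster mentions can be generated offline and stored in block RAM. A sketch of the idea, assuming a quarter-wave table and illustrative sizes (the depth, scaling, and function names are mine, not from the post):

```python
import math

# Store one quadrant of sine at build time; recover the full wave by
# symmetry, so a 256-entry ROM covers all 1024 phase steps.

DEPTH = 256          # entries per quadrant
SCALE = 32767        # Q1.15-style integer amplitude

QUARTER = [round(SCALE * math.sin(math.pi / 2 * i / DEPTH))
           for i in range(DEPTH + 1)]

def sin_lut(phase):
    """phase in [0, 4*DEPTH) -> integer sine via quarter-wave symmetry."""
    quadrant, idx = divmod(phase % (4 * DEPTH), DEPTH)
    if quadrant == 0:
        return QUARTER[idx]
    if quadrant == 1:
        return QUARTER[DEPTH - idx]
    if quadrant == 2:
        return -QUARTER[idx]
    return -QUARTER[DEPTH - idx]

def cos_lut(phase):
    return sin_lut(phase + DEPTH)   # cosine leads sine by a quarter turn
```

In an FPGA the quadrant bits become the two MSBs of the phase accumulator and the symmetry logic is an XOR/negate around a single ROM.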
The same with me; my inbox is pounded with bulk mail and I exceed storage in 20 min. I lost my ID too. Is there any solution to get away from this spam?

Article: 61053
Hello,

I'm using DCMs in a V2 design and I noticed that there is a Clocking Wizard in the new 6.1 tools. It looks useful because it allows you to go through the options to generate a correctly generic'ed and attributed DCM instantiation. The online documentation says that it will generate an HDL module that can be inserted into a design, but I get no HDL when I run the wizard.

Is there a trick to get HDL from the Clocking Wizard?

Regards

Article: 61054
Greetings,

I am new to hardware design and hoping to get a reality check on building a lexical analyzer and parser using FPGAs. I can see the following options.

1. A hardwired implementation of some or all of lexer & parser to maximize the performance.

2. Build a CPU with instructions optimized for interpreting parse tables and drive it with the output of a parser generator similar to YACC.

3. Have a RISC-like CPU implemented on the FPGA executing a parser.

I see no advantage to option 3 over doing the same thing on a general-purpose CPU. I don't know if the second option has any potential for increased performance. As far as the first option, I am wondering if the FPGAs currently available have the capacity to implement the hardwired parser for a language of low to moderate complexity, and how hard it might be to model it using a language like VHDL.

Please post any thoughts/comments/pointers about the various options.

Thanks

Article: 61055
Whoops! I see now how to do it. When the Clocking Wizard source is selected within the Project Navigator, there are process options to View HDL and View HDL Instantiation Template.

"Pete Dudley" <pete.dudley@comcast.net> wrote in message news:cn-dnQlWD6yfNumiU-KYvQ@comcast.com...
> Hello,
>
> I'm using dcm's in a V2 design and I noticed that there is a Clocking Wizard
> in the new 6.1 tools. It looks useful because it allows you to go through
> the options to generate a correctly generic'ed and attributed dcm
> instantiation. The online documentation says that it will generate an hdl
> module that can be inserted into a design but I get no hdl when I run the
> wizard.
>
> Is there a trick to get hdl from the Clocking Wizard?
>
> Regards

Article: 61056
I think I have a significantly simpler implementation of a Gray counter:

Start with a plain-vanilla binary counter. Add to it a register of equal length, where the Gray count value will appear. Drive each Di input of the Gray register with the XOR of the binary counter's Di and Di+1 inputs, since any Gray bit is the XOR of the two (equivalent and equivalent+1) binary bits. The subtle trick is to not use the binary Q outputs, but rather their D inputs to drive the Gray register, which avoids having the Gray counter trail the binary counter. (Good when the counter has a CE input.)

Peter Alfke, Xilinx Applications
=========================
Bob Perlman wrote:
>
> On 26 Sep 2003 06:08:08 -0700, dgleeson-2@utvinternet.com (Denis
> Gleeson) wrote:
>
> >Ok I was confused.
> >
> >I did implement a synchronous counter as everybody points out.
> >The glitches are due to routing delays in the FPGA.
> >
> >Now Im not confused any more.
> >
> >All I need now is to find verilog for a 16 bit gray code counter.
>
> Try this. Credit where credit is due: as noted in the comments, the
> gray code calculation comes from a paper by Cliff Cummings.
>
> Bob Perlman
> Cambrian Design Works
>
> //===============================================================
> // Module
> //===============================================================
> //
> // Module: gray_ctr_module.v
> //
> // Author: Bob Perlman
> //
> // Description: The Verilog code for a gray-code counter
> //   incorporates a gray-to-binary converter, a binary-
> //   to-gray converter and increments the binary value
> //   between conversions.
> //
> //   This code was based on Cliff Cummings' 2001 SNUG paper,
> //   "Synthesis and Scripting Techniques for Designing
> //   Multi-Asynchronous Clock Designs."
> //
> // Revision history:
> // Rev. Date     Modification
> // ---- -------- ----------------------------------------------
> //  -   12/29/01 Baseline version
>
> `timescale 1 ns / 1 ps
> module gray_ctr_module
>   (// Outputs
>    gray_ctr,
>    // Inputs
>    inc_en, preload_en, preload_val, clk, global_async_reset
>    );
>
> //==================
> // Parameters
> //==================
>
> parameter SIZE = 4;
>
> //==================
> // Port declarations
> //==================
>
> output [SIZE-1:0] gray_ctr;
>
> input inc_en;
> input preload_en;
> input [SIZE-1:0] preload_val;
> input clk;
> input global_async_reset; // Global asynchronous reset
>
> //====================================================
> // Reg, wire, integer, task, and function declarations
> //====================================================
>
> reg [SIZE-1:0] gray_plus_1, gray_ctr, bnext, bin;
> integer i;
>
> //=================
> // Logic
> //=================
>
> always @(posedge clk or posedge global_async_reset)
>   gray_ctr <= #1 global_async_reset ? 0 :
>               preload_en ? preload_val :
>               inc_en ? gray_plus_1 :
>               gray_ctr;
>
> // Calculate next gray code by:
> // 1) Converting current gray counter value to binary
> // 2) Incrementing the binary value
> // 3) Converting the incremented binary value to gray code
>
> always @(gray_ctr or inc_en) begin
>   for (i=0; i<SIZE; i=i+1)
>     bin[i] = ^(gray_ctr >> i);
>   bnext = bin + 1;
>   gray_plus_1 = (bnext>>1) ^ bnext;
> end
>
> endmodule

Article: 61057
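The construction Peter describes is easy to sanity-check in software: the Gray value is bin ^ (bin >> 1) (each Gray bit is the XOR of adjacent binary bits), and successive Gray codes, including the wraparound, should differ in exactly one bit. A quick Python model of that property check:

```python
# Verify the binary-to-Gray mapping and the one-bit-change property.

def bin_to_gray(b):
    # Each Gray bit is the XOR of adjacent binary bits.
    return b ^ (b >> 1)

def one_bit_apart(a, b):
    diff = a ^ b
    return diff != 0 and (diff & (diff - 1)) == 0  # exactly one bit set?

SIZE = 4
codes = [bin_to_gray(i) for i in range(2 ** SIZE)]
# Every adjacent pair, including the wrap from max back to 0, differs
# in exactly one bit -- the defining Gray-code property.
assert all(one_bit_apart(codes[i], codes[(i + 1) % len(codes)])
           for i in range(len(codes)))
```

This is the same (bnext>>1) ^ bnext expression that appears in the Verilog above, just evaluated exhaustively for a 4-bit counter.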
In article <6ae7649c.0309261310.11abb60b@posting.google.com>, Ekalavya Nishada <enishada@yahoo.com> wrote:
>Greetings,
>
>I am new to hardware design and hoping to get a reality check on
>building a lexical analyzer and parser using FPGAs. I can see the
>following options.
>
>1. A hardwired implementation of some or all of lexer&parser to
>maximize the performance.
>
>2. Build a CPU with instructions optimized for interpreting parse
>tables and drive it with the output of a parser generator similar to
>YACC.
>
>3. Have a RISC-like CPU implemented on the FPGA executing a parser.
>
>I see no advantage to option 3 over doing the same thing on a general
>purpose CPU. I don't know if the second option has any potential for
>increased performance. As far as the first option, I am wondering if
>the FPGAs currently available have the capacity to implement the
>hardwired parser for a language of low to moderate complexity and how
>hard it might be to model it using a language like VHDL.
>
>Please post any thoughts/comments/pointers about the various options.

Well, it's an ambitious project, anyway....

So are you trying to have a hardcoded interpreter? From what I think you're saying, you want to build this parser and then (I'm guessing) you want to either produce some bytecode for a VM (or in this case a real machine (RM) implemented in the FPGA) or build some kind of AST and walk it in your FPGA (?). Otherwise I can't see how it makes sense to just have a parser in an FPGA.

First off, what's the all-fire hurry? Parsing a simple language is pretty quick as is in software. It would be a _lot_ easier to implement your CPU in the FPGA and then use software to parse and compile the frontend language into machine code that runs on your CPU.
Secondly, assuming that I'm guessing wrong and you just want a parser for a programming language implemented in an FPGA: I think this could be pretty difficult, but perhaps not impossible, especially if you have some large amount of memory available either inside of or external to your FPGA. You'd have a bytestream coming in which represents characters of your tokens, a tokenizer (a state machine) and then another big state machine that implements your parser. But again, after you've parsed this language, what do you intend to do with it?

Phil

Article: 61058
What about the case where sync_reset=0 and clk_ena=0? Your code doesn't describe the desired behavior for this case.

Jake

"MM" <mbmsv@yahoo.com> wrote in message news:<bl1i00$76rai$1@ID-204311.news.uni-berlin.de>...
> process(clk)
> begin
>     if rising_edge (clk) then
>         if sync_reset='1' then
>             outf <= '0';
>         elsif clk_ena='1' then
>             outf <= '1';
>         end if;
>     end if;
> end process;
>
> Thanks,
> /Mikhail

Article: 61059
Dear All,

Problem. I've just been let down by an oscillator manufacturer. They can only make the ordered 3.3V differential LVPECL oscillator parts work at 5V. Some excuse about their quartz supplier. So, I can't stick 5V PECL into my 3.3V Virtex-E differential input; it's outside the common mode range. So, I could AC couple it with a couple of caps after the PECL driver's emitter resistors. I then need to bias the signal into the common mode range of the VirtexE diff input.

Question. Anybody know if I could somehow activate the internal pullup resistor on one input and the pulldown on the other to bias the signal in the middle of the supply? There's already a 100 ohm termination resistor between the pins. Or, any better ideas?

cheers all, Syms.

p.s. I know I could use more resistors to do this biasing, but the board layout makes this awkward. The VirtexE is, of course, a BGA.

Article: 61060
Hi -

On Fri, 26 Sep 2003 14:26:12 -0700, Peter Alfke <peter@xilinx.com> wrote:

>I think I have a significantly simpler implementation of a Gray counter:
>
>Start with a plain-vanilla binary counter.
>Add to it a register of equal length, where the Gray count value will appear.
>Drive each Di input of the Gray register with the XOR
>of the binary counter's Di and Di+1 inputs, since any Gray bit is the
>XOR of the two (equivalent and equivalent+1) binary bits.
>The subtle trick is to not use the binary Q outputs, but rather their D
>inputs to drive the Gray register, which avoids having the Gray counter
>trail the binary counter. (Good when the counter has a CE input).
>
>Peter Alfke, Xilinx Applications
>=========================

If you want the best speed and have the extra (~2X) FFs to spare, Peter's is the better solution, at least for longer counters. Here's the Verilog code for the implementation. I've synthesized but not simulated it, so I can't promise that it works.

Hoping that this wasn't a homework problem,
Bob Perlman
Cambrian Design Works

//===============================================================
// Module
//===============================================================
//
// Module: gray_ctr_module.v
//

`timescale 1 ns / 1 ps
module gray_ctr_module
  (// Outputs
   gray_ctr,
   // Inputs
   inc_en, clk, global_async_reset
   );

//==================
// Parameters
//==================

parameter SIZE = 16;

//==================
// Port declarations
//==================

output [SIZE-1:0] gray_ctr;

input inc_en;
input clk;
input global_async_reset; // Global asynchronous reset

//====================================================
// Reg, wire, integer, task, and function declarations
//====================================================

reg [SIZE-1:0] gray_ctr, binary_ctr;
wire [SIZE-1:0] next_binary_ctr_val, next_gray_ctr_val;

//=================
// Logic
//=================

assign next_binary_ctr_val = binary_ctr + 1;

always @(posedge clk or posedge global_async_reset)
  binary_ctr <= #1 global_async_reset ? 0 :
                inc_en ? next_binary_ctr_val :
                binary_ctr;

assign next_gray_ctr_val = (next_binary_ctr_val>>1) ^ next_binary_ctr_val;

always @(posedge clk or posedge global_async_reset)
  gray_ctr <= #1 global_async_reset ? 0 :
              inc_en ? next_gray_ctr_val :
              gray_ctr;

endmodule

>Bob Perlman wrote:
(snip of the earlier gray-code counter code, quoted in full upthread)

Article: 61061
Christian,

I'm not sure without seeing your design, but the problem may be caused by a known issue we have in PAR for partial reconfig flows. The fix is scheduled for the next service pack (mid October).

Steve

Christian Haase wrote:
> Hello,
>
> has anybody tried the partial reconfiguration flow
> with the latest ISE version 6.1? Are there any fundamental
> differences between the actual and previous releases?
> (e.g the MODE = RECONFIG attribute)
>
> In the assemble phase PAR aborts guiding my design
> with the message:
> abnormal program termination
> (without any further information)
>
> Thanks in advance.
>
> Christian

Article: 61062
Eric Smith wrote:
>Jim Granville wrote:
>>Do they have the same ceiling ?
>
>Steve Lass <lass@xilinx.com> writes:
>>Yes, both Windows XP and Linux can address 3G. As far as I know,
>>Windows 2000 will only address 2G.
>
>Running 64-bit Linux on an AMD64 processor (Opteron or Athlon 64)
>allows 32-bit Linux applications to access just under 4G.

I didn't know this, but apparently others in the software organization did. Thanks for the info,

Steve

>I haven't tried ISE 6.1i on an AMD64 yet, but I expect that it should
>be able to P&R larger designs than on 32-bit CPUs.
>
>None of my designs to date have needed more than 1.5G.

Article: 61063
"Phil Tomson" <ptkwt@aracnet.com> wrote in message news:bl2dji0svq@enews1.newsguy.com...
> In article <6ae7649c.0309261310.11abb60b@posting.google.com>,
> Ekalavya Nishada <enishada@yahoo.com> wrote:
> >Greetings,
> >
> >I am new to hardware design and hoping to get a reality check on
> >building a lexical analyzer and parser using FPGAs. I can see the
> >following options.

(snip of options)

> >Please post any thoughts/comments/pointers about the various options.

> Well, it's an ambitious project, anyway....
>
> So are you trying to have a hardcoded interpreter? From what I think
> you're saying, you want to build this parser and then (I'm guessing) you
> want to either produce some bytecode for a VM (or in this case a real
> machine (RM) implemented in the FPGA) or build some kind of AST and walk
> it in your FPGA (?). Otherwise I can't see how it makes sense to just
> have a parser in an FPGA.
>
> First off, what's the all-fire hurry? Parsing a simple language is pretty
> quick as is in software. It would be a _lot_ easier to implement your CPU
> in the FPGA and then use software to parse and compile the frontend
> language into machine code that runs on your CPU.

It would be nice to know the reason for the question. As far as I know, parsing of existing languages isn't limiting compilation times. Most languages were designed when machines were much slower than they are today, and if it was a problem then, they might have designed the language to help speed up parsing. I know many cases where that wasn't even true.

> Secondly, assuming that I'm guessing wrong and you just want a parser for a
> programming language implemented in an FPGA: I think this could be pretty
> difficult, but perhaps not impossible especially if you have some large
> amount of memory available either inside of or external to your FPGA.
> You'd have a bytestream coming in which represents characters of your
> tokens, a tokenizer (a state machine) and then another big state machine
> that implements your parser. But again, after you've parsed this
> language, what do you intend to do with it?

As a large part of parsing is finite state machines, and they are pretty easy to build in FPGAs (though with external RAM if they get really huge), that part seems pretty easy. Now, why would you want to do that? The state tables could be generated externally, on a more ordinary machine. Storing a symbol table would be a little more challenging, but I don't think even that should be too hard. Hash algorithms should be easy to implement, for example, or trees if that turns out better.

One possibility that I could think of would be in pattern matching. If you consider a program like grep a parser, which signals the point at which it recognizes a correct match, then one could use it for high speed pattern matching.

-- glen

Article: 61064
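glen's point that the run-time engine is little more than a table lookup can be sketched as a toy table-driven tokenizer. The state set, character classes, and transition table below are illustrative inventions; the structure, "state = table[state][class(char)]", is exactly what maps onto a RAM plus a state register in an FPGA:

```python
# Minimal table-driven tokenizer: the table is built offline (here, by
# hand); the run-time loop is one table lookup per input character.

def char_class(c):
    if c.isalpha(): return 0   # letter
    if c.isdigit(): return 1   # digit
    return 2                   # other (delimiter)

# states: 0 = start, 1 = in-identifier, 2 = in-number
TABLE = {
    0: {0: 1, 1: 2, 2: 0},
    1: {0: 1, 1: 1, 2: 0},     # identifiers may contain digits
    2: {0: 0, 1: 2, 2: 0},
}
NAMES = {1: "IDENT", 2: "NUMBER"}

def tokenize(text):
    tokens, state, start = [], 0, 0
    for i, c in enumerate(text + " "):     # trailing sentinel flushes last token
        nxt = TABLE[state][char_class(c)]
        if state != 0 and nxt != state:    # leaving a token: emit it
            tokens.append((NAMES[state], text[start:i]))
            state = 0
            nxt = TABLE[0][char_class(c)]
        if state == 0 and nxt != 0:        # entering a token: remember start
            start = i
        state = nxt
    return tokens
```

Since only the table contents change per language, the same hardware engine could be reloaded with tables emitted by a generator, which is essentially glen's "state tables generated externally" suggestion.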
> > Do you have a working prototype yet?
>
> Product. Shipping.

Do you have a URL to that product?

> > If not, I hope you let us know when you
> > succeed at drawing a simple line on the screen
>
> I think that was back in 1978, maybe '77. Don't remember exactly.

That's quite an old product. What is the application?

> > and tell us about what parts of your design were challenging
>
> Working 16 hours a day, seven days a week for nearly two years.

Were you self employed or working for someone else? I assume someone else.

> > and how you solved your problems.
>
> I finished.

So for some ~20 years your product was fine with rectangles and now needs to be upgraded to diagonals and curves? What a nice, slow-moving market you operate in.

In keeping with the theme of poor communication skills: After giving up figuring out how to do it without a frame buffer, the mental block lifted while I was driving. The 45 degree diagonals are pretty easy to generate. Given a starting point you can easily draw a line SW (south west), S, SE, or E. With that you have the basic building blocks to build a diagonal line of arbitrary angle. Working out the algorithm for that will take a bit more work, but hopefully will only involve accumulating an increment with a fractional part.

I'm guessing a curve won't be so bad because it's merely a diagonal line whose slope changes over time. If the slope change is a constant, you have an arc of a circle. If the slope change varies over time also, then you have an arc of an oval or something like that. To draw a full circle you can break it into two curves.

But unfortunately there's one gotcha. I can't figure out how to draw a diagonal line that lies between W and SW, which is a shame since the idea I'm playing with seems pretty elegant for the time being. It could get ugly fast.

Regards,
Vinh

Article: 61065
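The "accumulate an increment with a fractional part" scheme Vinh describes is essentially the classic integer DDA/Bresenham formulation. A sketch of the standard octant-general version (this is the textbook algorithm, not code from the posts); note that the sign terms sx/sy also cover the headings between W and SW that he mentions as the gotcha:

```python
# Classic integer Bresenham line: step one pixel at a time along the
# line, carrying an error accumulator that decides when to also step
# the minor axis -- no floating point, one add/compare per pixel.

def bresenham(x0, y0, x1, y1):
    """Yield the pixels of a line in any direction, endpoints included."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 >= x0 else -1        # handles W as well as E headings
    sy = 1 if y1 >= y0 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        yield (x, y)
        if (x, y) == (x1, y1):
            return
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
```

For the on-the-fly overlay case the same error-accumulator recurrence could be evaluated per scanline instead of per pixel, since the raster visits pixels in scan order rather than along the line.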
> > I think that was back in 1978, maybe '77. Don't remember exactly.
>
> That's quite an old product. What is the application?

...

> So for some ~20 years your product was fine with rectangles and now needs to
> be upgraded to diagonals and curves? What a nice, slow moving market you
> operate in.

We are squarely in the realm of comedy at this point. Thanks for a good laugh. I hope that was your intention.

-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Martin Euredjian

To send private email:
0_0_0_0_@pacbell.net
where "0_0_0_0_" = "martineu"

Article: 61066
The problem is not insurmountable. LUTs are one possible solution. And some of these solutions consume quite a bit of FPGA resources and might be quite complex. The original reason for my post (which got lost pretty quickly) was to sort of do a survey of available (or well-known) techniques. Based on the response, I gather that this is so rare that very little, if any, publicly available literature might exist.

-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Martin Euredjian

To send private email:
0_0_0_0_@pacbell.net
where "0_0_0_0_" = "martineu"

Article: 61067
>Bypassing the frame buffer means you need to have some fast algorithm to do
>some fast drawing. Correct?

[Serious old-fart alert.]

In the old days, they made vector displays. The beam had X and Y addresses that you could control rather than a raster scan. You could draw individual points. To make a line, you could draw a sequence of points, or use the line drawing hardware assist.

Typically, you fed the display a "display list" which was just a list of commands with an op code set like move absolute, move relative, draw dot, draw line... subroutine call/return. ...

There were some amazing ideas discovered back then. If you like clever tricks in this area, feed "hackmem" to google and look in the index under "display". (HACKMEM is a collection of tricks and hacks (in the old meaning of the word) from the PDP-6 days at the MIT AI lab.)

Back in those days, the display technology was reasonably well matched to the hardware for processing the display list. I don't remember any hardware to do curves but it seems reasonable.

The CTSS machine (modified IBM 7094) at project MAC had a display list that drew lines in 3 dimensions and did the projection to the two dimensional screen in hardware. All you had to do for a 3D rotate was tweak the parameters in a rotation matrix.

That technology was OK for wire stick models. If your picture got too complicated the refresh time was slow and you got lots of flicker. (The refresh time was the time to process the whole display list.)

I think some of the vector displays may have used electrostatic deflection rather than magnetic.

If you think about high quality displays, it's hard to do better than a frame buffer. You get to piggyback on the technology and economics of TV displays.

If you don't like frame buffers, you can use an LCD display where the buffer is included as part of the display.
Other old display technologies without frame buffers:

Early glass TTYs stored the "picture" as ASCII (rather than raw frame buffer bits) and did the translation to pixels on the fly. You could get TTL chips that were ROMs for 5x7 dot matrix fonts. Some of the address bits were the character. Some were the row within the character that the display was processing now. It might be fun to build a VT-100 in an FPGA.

I think Tektronix made a family of displays using storage tubes. No frame buffer needed. Hard to turn a pixel off though. You had to erase the screen (blink, flash) and repaint what you wanted to remain.

-- 
The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.

Article: 61068
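Hal's display-list idea can be modeled as a tiny interpreter walking a list of (opcode, args) commands. The opcode set below (move/draw, absolute/relative, plus subroutine call) is illustrative of the style he describes, not any particular machine's instruction set:

```python
# Toy display-list interpreter: returns the line segments that the
# beam would draw, instead of driving deflection hardware.

def run_display_list(program, subroutines=None):
    subroutines = subroutines or {}
    x = y = 0
    segments = []

    def execute(cmds):
        nonlocal x, y
        for op, *args in cmds:
            if op == "move_abs":
                x, y = args
            elif op == "move_rel":
                x, y = x + args[0], y + args[1]
            elif op == "draw_abs":
                segments.append(((x, y), tuple(args)))
                x, y = args
            elif op == "draw_rel":
                nx, ny = x + args[0], y + args[1]
                segments.append(((x, y), (nx, ny)))
                x, y = nx, ny
            elif op == "call":                  # subroutine call/return
                execute(subroutines[args[0]])

    execute(program)
    return segments

# Draw a unit square via a subroutine, positioned by the caller.
square = [("draw_rel", 1, 0), ("draw_rel", 0, 1),
          ("draw_rel", -1, 0), ("draw_rel", 0, -1)]
prog = [("move_abs", 10, 10), ("call", "square")]
```

The refresh-rate limit Hal mentions falls out directly: the whole list is re-executed every refresh, so a longer segment list means a longer refresh time and, eventually, flicker.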
> We are squarely in the realm of comedy at this point. Thanks for a good
> laugh. I hope that was your intention.

Yes I do get quite sarcastic at times, don't I?

You originally posted your question to the group expecting straightforward responses. I and others couldn't get past the mental block of not using a frame buffer and were trying to convince you to do things the way we thought it should be done. You got frustrated (very understandable) and you quickly fired off a scenario. I suspect what you were trying to tell us in an indirect way was, "Okay guys, I know you don't think it can be done without a frame buffer, but let's pretend."

Unfortunately within your scenario you made the statement: "In addition to this, due to cost constraints, you are not allowed to have rendering memory for the graphics overlay." Engineers are so anal about little details, we often miss the main point, and like pit bulls we latched onto the claim that memory was too expensive and started to argue with you about that, thinking to ourselves, "Poor misguided soul, the reason he's not using a frame buffer is because he thinks memory is too expensive." Needless to say this increased your frustration further.

Dealing with engineers is like playing with a long piece of masking tape. If you're not very careful a part of it will stick to you, and your struggles to free yourself will get you into more of a mess. I know when people misunderstand me my first response is to respond quickly and at great length, but that usually leads to more miswording and misunderstanding.

Perhaps one approach would have been to state from the outset: "I have already built a device that overlays horizontal and vertical lines without using a frame buffer. I'm trying to add diagonals and curves now." This might have helped us avoid the mental block because you've "proven" it's possible to not use a buffer. Heh, of course we probably would have said, "Well, switch over to a buffer."
Well, I'd better stop before I start getting more long-winded. Anyway, just some random thoughts on the situation. My apologies for responding to your frustration with sarcasm, and for not reading some of your words carefully enough, and reading others too carefully. May you have better luck in the future.

Regards,
Vinh

Article: 61069
> [Serious old-fart alert.]

LOL. Thanks for the interesting trivia, Hal :_) Did you work with any of those particular technologies?

Hey, did you hear about the progress they're making in electronic ink?
http://www.nature.com/nsu/030922/030922-10.html

--Vinh

Article: 61070
>The card is authentic and from a reliable company. My question is:
>wasn't there any concern at that time (1980) about asynchronous design,
>given that the designer used so many asynchronous techniques?

Do you have the schematics? If so, look for the standard clock-qualifier pattern. The clock will run into one side of an '00 and the qualifying signal will run into the other side. The output is a qualified clock. It will look like an asynchronous clock to the FPGA tools, but it really was synchronous in the designer's mind. This was used because chips like a '374 didn't have a clock-qualifying input pin.

A common clock distribution scheme in the old TTL days was that the master clock from the osc clocked a FF to square things up; the output of the FF then fanned out to several buffers, and they went to another layer of '00s which were buffers and qualifiers. Always-running clocks went through a dummy '00 to balance the skew (and get the polarity right).

You could also get similar qualified clocks out of a '138 or '139 by feeding the clock into the enable pin. This gets you 1-of-N decoding for things like writing to 1 of several chips. The skew wasn't as well balanced but it generally worked well enough.

Note that all of the '00s used in the clock distribution chain were the same technology -- no mixing LS and F, as that would mess up the clock skew.

-- 
The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.

Article: 61071
I worked with PDP11-34's and Tek storage displays back a few (let's leave it at that) years ago. -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Martin Euredjian To send private email: 0_0_0_0_@pacbell.net where "0_0_0_0_" = "martineu" "Hal Murray" <hmurray@suespammers.org> wrote in message news:vn9qqgroj34s6d@corp.supernews.com... > >Bypassing the frame buffer means you need to have some fast algorithm to do > >some fast drawing. Correct? > > [Serious old-fart alert.] > > In the old days, they made vector displays. The beam had X and Y addresses > that you could control rather than a raster scan. You could draw > individual points. To make a line, you could draw a sequence of points, > or use the line drawing hardware assist. > > Typically, you fed the displa a "display list" which was just a list > of commands with an op code set like move absolute, move relative, > draw dot, draw line... subroutine call/return. ... > > There were some amazing ideas discovered back then. If you like > clever tricks in this area, feed "hackmem" to google and look in > the index under "display". (HACKMEM is a collection of tricks > and hacks (in the old meaning of the word) from the PDP-6 days > at the MIT AI lab.) > > Back in those days, the display technology was reasonably well matched > to the hardware for processing the display list. I don't remember > any hardware to do curves but it seems reasonable. > > The CTSS machine (modified IBM 7094) at project MAC had a display > list that drew lines in 3 dimensions and did the projection to the two > dimensional screen in hardware. All you had to do for a 3D rotate > was tweak the parameters in a rotation matrix. > > That technology was OK for wire stick models. If your picture got > too complicated the refresh time was slow and you got lots of flicker. > (The refresh time was the time to process the whole display list.) > > I think some of the vector displays may have used electrostatic > deflection rather than magnetic. 
> > If you think about high quality displays, it's hard to do better than > a frame buffer. You get to piggyback on the technology and economics > of TV displays. > > If you don't like frame buffers, you can use an LCD display > where the buffer is included as part of the display. > > > Other old display technologies without frame buffers: > > Early glass TTYs stored the "picture" as ASCII (rather than raw > frame buffer bits) and did the translation to pixels on the fly. > You could get TTL chips that were ROMs for 5x7 dot matrix fonts. > Some of the address bits were the character. Some were the row > within the character that the display was processing now. It might > be fun to build a VT-100 in an FPGA. > > I think Tektronix made a family of displays using storage tubes. > No frame buffer needed. Hard to turn a pixel off though. You had > to erase the screen (blink, flash) and repaint what you wanted to > remain. > > -- > The suespammers.org mail server is located in California. So are all my > other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited > commercial e-mail to my suespammers.org address or any of my other addresses. > These are my opinions, not necessarily my employer's. I hate spam. >Article: 61072
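The glass-TTY scheme above (character-generator ROM addressed by character plus scan row) is easy to sketch. The two glyph patterns below are hand-made 5x7 bitmaps for illustration, not taken from any real font ROM:

```python
# Character-generator "ROM": maps (character, scan row) to a 5-bit pixel row,
# the way a TTL font ROM was addressed by character code and row counter.
FONT_ROM = {
    "H": [0b10001, 0b10001, 0b10001, 0b11111, 0b10001, 0b10001, 0b10001],
    "I": [0b11111, 0b00100, 0b00100, 0b00100, 0b00100, 0b00100, 0b11111],
}

def scanline_pixels(text, row):
    """Pixels for one scan row of a text line: each char contributes 5 bits,
    generated on the fly -- the display stores ASCII, not pixels."""
    bits = []
    for ch in text:
        rowbits = FONT_ROM[ch][row]   # ROM address = (char code, row counter)
        bits.extend((rowbits >> (4 - i)) & 1 for i in range(5))
    return bits

# Row 3 of "HI": the crossbar of the H next to the stem of the I.
print(scanline_pixels("HI", 3))
```

The whole "frame buffer" is one row of ASCII codes per text line; pixels exist only for the duration of the scan, which is the property the thread keeps circling around.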
"Vinh Pham" <a@a.a> wrote: > Perhaps one approach would have been to state from the outset: "I have > already built a device that overlays horizontal and vertical lines without > using a frame buffer. I'm trying to add diagonals and curves now." Nope. The best approach would have been for you to have read the original post carefully. Let's review it: <START REVIEW> >> I know about the various algorithms to draw lines, circles, etc. >> All of these pretty much rely on painting onto a frame buffer >> that is later used to scan out to a CRT." OK. This guy understands the whole frame buffer thing and knows those algorithms. >> Does anyone know of any algorithms to draw primitives that work >> without the intermediate frame buffer step? OK. The challenge is to find a solution that does not use a frame buffer. Therefore, I should not bother to reply if I'm going to say that he should use a frame buffer, 'cause he already knows how to do that. That is not what he's looking for. He very clearly states that he is interested in solutions that do not use a frame buffer. Telling him that he's a dummy for not using a frame buffer would be missing the point altogether. This is not the sort of thing that's going to help me pass my reading and comprehension test. I should stick to the constraints as delineated and strongly suppress my urge to go off on a tangent that has nothing to do with the question being asked. This, of course, is not what I'll do 'cause I fully intend to have selective recall of what he's asking. I'll give him the answer I want him to hear, whether it's applicable or not. I'm happy with my answer. He should be happy as well. <COMMENT> If you know George Carlin, that is what I imagine he would say. :-) >> In other words, the algorithm's input would be the current x,y pixel >> being painted on the screen and the desired shape's parameters. OK. He's also nailing down the input parameters, which are simple.
>> Horizontal and vertical lines (and rectangles), of course, are easy. OK. Yes, he understands very well that pure H and V lines are not an issue. >> But, how do you do curves or diagonal lines? Aha! He's no dummy. He knows where things get messy. If I were to reply, this is where I should concentrate my efforts. How do you draw curves or diagonal lines in real time, without using an intermediate frame buffer and with real-time pixel x,y coordinates and graphic entity parameters as your sole input parameters? >> It seems to me that you'd take y and solve for x, which could produce >> multiple results (say, a line near 0 degrees). You'd have to save the >> results for that y coordinate in a temporary buffer that would then >> be used to compare to x. That's as simple as I can come up with." OK. He's also proposing one possible approach. However, the question he posed earlier suggests that he's looking for alternative solutions or commentary on what he is proposing. <END REVIEW> It was all there buddy. Dead topic. Move on. Have a great weekend. :-) -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Martin Euredjian To send private email: 0_0_0_0_@pacbell.net where "0_0_0_0_" = "martineu"Article: 61073
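The "hit/miss engine" framing from earlier in the thread can be made concrete: for each primitive, a pure function of the current raster (x, y) and the shape's parameters that answers "is this pixel on?", using only multiplies and adds so it could run at pixel rate with no frame buffer. The function signatures and thresholds below are illustrative:

```python
# Per-pixel hit tests for two primitives; each is evaluated independently
# at every raster position as the video flows through.
def on_segment(x, y, x0, y0, x1, y1, half_width=0.6):
    """Pixel lies on a (possibly diagonal) line segment."""
    dx, dy = x1 - x0, y1 - y0
    # The cross product is proportional to perpendicular distance from the
    # infinite line; comparing squares avoids any sqrt or divide.
    cross = (x - x0) * dy - (y - y0) * dx
    length2 = dx * dx + dy * dy
    if cross * cross > half_width * half_width * length2:
        return False
    dot = (x - x0) * dx + (y - y0) * dy   # reject points past the endpoints
    return 0 <= dot <= length2

def on_circle(x, y, cx, cy, r, half_width=0.6):
    """Pixel lies on a circle outline of radius r centered at (cx, cy)."""
    d2 = (x - cx) ** 2 + (y - cy) ** 2
    # |sqrt(d2) - r| <= w  is equivalent to  (r-w)^2 <= d2 <= (r+w)^2
    return (r - half_width) ** 2 <= d2 <= (r + half_width) ** 2

# Scan a small raster the way video flows -- one pixel at a time, no buffer.
frame = "\n".join(
    "".join(
        "#" if on_circle(x, y, 6, 6, 5) or on_segment(x, y, 1, 1, 11, 11)
        else "."
        for x in range(13)
    )
    for y in range(13)
)
print(frame)
```

Each primitive costs a handful of multiply-adds per pixel, which is why the thread's idea of one engine per primitive (or a multiplexed engine) is plausible in an FPGA: the overlay decision is just the OR of all the hit functions for the current coordinate.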
Andy Peters wrote: > > rickman <spamgoeshere4@yahoo.com> wrote in message news:<3F7257C6.91BECAD0@yahoo.com>... > > > That is the part I am not clear about. These traces are all individual > > circuits. If you have the luxury of a lot of open board space to route > > straight lines here and there, then sure, you can make each one very > > similar. On a small, tight board it will be very difficult to make them > > that similar. If the signal is critical enough to require a simulation, > > then I expect I would need to simulate each of them. > > The traces may not actually be individual circuits! While Austin (and > others, including myself) advocate SI simulations to show the effects > of terminations on line ringing and what not, what can _really_ bite > you in the ass is crosstalk. In one particular case, there was an > issue with crosstalk from a data bus affecting a nearby reset line. > When a simulation was finally run, the problem was obvious. > > So, yeah, I'd say that not bothering to simulate these lines because > they weren't clocks and because "the signals can bounce around for a > couple of ns" was a bad idea. Ringing is not the cause of crosstalk. Crosstalk is coupling of the primary wavefront to adjacent traces due to proximity over excessively long parallel runs. No amount of simulation will correct a problem if you don't understand what is going on. > The time spent simulating upfront is well worth the investment. You > simulate your FPGA logic, because you'd rather spend the time in front > of the computer, rather than in the lab with a 'scope probe? Same > thing here, except that a board spin is a lot more expensive than > reprogramming that ISP EEPROM. > > Oh, yeah, the 'scopes and probes required to really see these types of > problems in the lab cost more than the SI software. But you are making assumptions about the circuits I am building.
The original issue was the fact that the Spartan 3 chips are sensitive to even short term overvoltage due to ringing. Like I have said, I have never seen this in any data sheet until now. All the chips I have worked with either have specifically indicated that there would be no problem of damage due to small, short term transitions outside the rated voltage spec, or this was stated when the manufacturer was contacted. The Spartan 3 chips are the first I have heard of this being specifically contraindicated. > > I am surprised that the Spartan 3 chips are so sensitive to over and > > undershoot that this has become a major issue. I have seen lots of high > > speed boards and none had FPGAs or any other chips that needed this > > degree of analysis to prevent damage. > > Perhaps Xilinx are simply erring on the side of caution. They're > informing the user of potential issues when they can be dealt with -- > in the design phase -- rather than when boards are RMAed and customers > are pissed. > > In any event, I think Austin's tone was one of frustration -- after > all, he's trying to help you! Basically, he's saying that if you do > the simulations up front, your board can be designed such that these > potential problems don't turn out to be actual problems. He is not the only one who is frustrated. My questions were not about the issues of designing for SI, but about the sensitivity of the Spartan 3 chips to damage from ringing. His replies are not responsive to my comments and questions. I can do a few simple calculations to get worst case numbers for ringing on a 6" trace. I don't need to use expensive software that does the same calculation with a few extra variables thrown in that simply fine tune the calcs. The other reason that I can't simulate the signals up front is because the Spartan 3 in this design will be driving signals to multiple daughter boards that are not designed or even planned yet.
Obviously this will have to be dealt with at the design level when the time comes. -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 61074
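The "few simple calculations" for worst-case ringing on a short trace can be sketched from transmission-line basics. All numbers below are typical assumptions for illustration (50-ohm trace, roughly 150 ps/inch propagation, 3.3 V swing, a low driver impedance, a CMOS input treated as an open), not values from the thread:

```python
# Back-of-envelope first-reflection overshoot on an unterminated trace.
def reflection_coefficient(z_load, z0=50.0):
    """Gamma = (ZL - Z0) / (ZL + Z0); ~ +1 for an open, -1 for a short."""
    return (z_load - z0) / (z_load + z0)

def first_overshoot(v_swing, z_source, z_load, z0=50.0):
    """Peak voltage at the far end after the first reflection arrives."""
    v_incident = v_swing * z0 / (z0 + z_source)   # launch-step divider
    gamma_l = reflection_coefficient(z_load, z0)
    return v_incident * (1 + gamma_l)

round_trip_ps = 2 * 6 * 150                       # 6-inch trace, ~150 ps/in
peak = first_overshoot(3.3, z_source=15.0, z_load=1e9)  # open-ended load
print(f"round trip ~{round_trip_ps} ps, first peak ~{peak:.2f} V")
```

With a stiff driver into an open end the first peak lands well above the 3.3 V rail, which is exactly the transient overvoltage the Spartan 3 discussion is about; a series termination that raises the effective source impedance toward Z0 knocks the incident step down to half-swing and kills the overshoot.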
> It was all there buddy. I am bested, I concede you the field. > Dead topic. Move on. Have a great weekend. :-) Why thank you :-)