Hello Steven I have actually purchased a Xilinx kit thinking that a Linux version of WebPack would be available. I do not have Windows installed on my system. Is there any plan to release WebPack for Linux users? I am hoping Xilinx will release one very soon. Regards - puneet > > Hi Steven, > > I've noticed that the WebPack software only runs on a Windows system. > Is there any chance that it will be released for Linux some day? I > usually don't work on the Windows platform, so it is annoying me like > hell, being forced to use it... > > Thanks. > > Kind regards, > > YvesArticle: 61076
Ekalavya Nishada wrote: > Greetings, > > I am new to hardware design and hoping to get a reality check on > building a lexical analyzer and parser using FPGAs. Jan Gray, begetter of the excellent but more or less defunct fpgacpu.org site, has discussed this. Search in Google.Article: 61077
Martin Euredjian wrote: > I finished. Isn't design work like writing a book? You cannot finish, you can only abandon.Article: 61078
Hi, an e-mail address with your own domain is the best. I used one for the first time. After I started to receive this "microsoft patch" spam, I simply blocked it. Now I have changed to Yahoo, which recognizes this spam too. /Vakaras/Article: 61079
I'd report it directly to Altera... Because it's nonsense. P.S. Quartus II version 3.0 is available. Maybe this one will be better... /Vakaras/Article: 61080
Puneet Goel <puneet@computer.org> wrote: : Hello Steven : I have actually purchased a Xilinx kit thinking that a Linux version : of WebPack would be available. : I do not have Windows installed on my system. : Is there any plan to release WebPack for Linux users? I am hoping I heard that the next version of WebPack, scheduled sometime next year, should support Red Hat. Meanwhile, try running WebPack with a recent, well-configured Wine. : Xilinx would release one very soon. : Regards Please, no TOFU (full quote at bottom). Bye -- Uwe Bonnes bon@elektron.ikp.physik.tu-darmstadt.de Institut fuer Kernphysik Schlossgartenstrasse 9 64289 Darmstadt --------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------Article: 61081
"SneakerNet" <nospam@nospam.org> wrote in message news:zQJcb.159026$JA5.3914623@news.xtra.co.nz... > Hi All > > I have a Nios Development Board that has a crystal osciallating at 50MHz. > This is correct as I have seen the waveform and measured the frequency on an > osilloscope. > > I am trying to implement USB Prototcol, for which I need a clock speed of > 48MHz. How can I reduce the clock speed from 50 to 48. I have written a code > that reduces a given speed to any speed, however it has its limitations. > The code is presented below. This code is fully generic, thus user only has > to give the current clock speed and the wanted clock speed. This code works > fine as I am using this code to reduce the clock speed to 12.5MHz and 25Mhz. > However it does not work for 30Mhz and 48Mhz as the result is a fraction and > my code can't handle it. > > How can i fix this. How can I generate a clock of 48Mhz given that the > crystal is 50Mhz. > Pls Advice (Aplogoies in advance as the code does not have any comments, but > it is very self-explanatory..) > > Regards > ======================================================= > LIBRARY IEEE; > USE IEEE.std_logic_1164.all; > USE IEEE.std_logic_arith.all; > > entity slow_clk is > generic ( > Clock_Speed : integer := 50000000; > New_ClkSpeed: integer := 50000000 > ); > port ( > clock : in std_logic; > slow_clock : out std_logic > ); > end entity slow_clk; > > architecture behavioural of slow_clk is > constant con_StopCnt : integer := ((Clock_Speed / New_ClkSpeed) / 2); > signal main_cnt : integer range 1 to ((Clock_Speed / New_ClkSpeed) / 2); > signal sig_TmpClk : std_logic; > > begin > slow_clock <= sig_TmpClk; > > process is > begin > wait until rising_edge (clock); > if main_cnt = con_StopCnt then > sig_TmpClk <= not sig_TmpClk; > main_cnt <= 1; > end if; > else > main_cnt <= main_cnt + 1; > end if; > > end process; > > end architecture behavioural; > ======================================================= > > (I'm assuming you're using the Nios Dev Board) It's easy. Just type in 48 MHz as the speed in the SOPC builder for your Nios processor, then double click on the PLL that runs the Nios and goto clock C0 screen and enter 24 and 25 for the multiplier and divisor. (I did this to change my C0 to 68 MHz just to see if it would run that fast - no problem!) You should also be able to use clock C1 as well and then not have to alter the Nios's C0 clock. You could also probably alter the already hooked up E0 clock, but I don't know the details. You probably solved this all by now. If you use Nios in the title I will pick up on it quickly. I am working on a Nios project and would like to discuss as many aspects of this technology as possible. I'd like to hear about your USB progress. We're using an external chip on our custom board, but would be interested in a IP Core implementation. KenArticle: 61082
"Jerry" <nospam@nowhere.com> wrote in message news:vn70f8iodqolc4@corp.supernews.com... > This is from memory since work doesn't have access to newsgroups so here > goes. > > We have a Stratrix development board with the NIOS software package. Along > with that > is FS2 debugger. I have worked through the enclosed tutorials both the HW > and SW. > All was well and the debugger worked fine. I then wanted to add my own > hardware into the > PGA. I created a new project popped open Quartus 3.0 and SOPC builder > creating a NIOS > based system pretty much if not identical to the one in the tutorials. I > used the germs_monitor.c. > Popped open two SDK shells, one for nr -t -r and the other to compile and > down load the > C code. nd -d hello_nios.c and nb hello_nios.out following the tutorial. > Well I get messages that > it (OCI?) can not get the com1 port but the debugger window pops up. No > green bar in the > source code which indicates that communications is down. I have yet to get > this setup, Stratix board and > debugger to work on anything other than the tutorials. While in the SDk > shell I did move to my project > dir. > > Anybody out there have similar experiences? > > Thanks > Jerry > > Hi Jerry, Any progress? I haven't built a Nios system from scratch. I always start with one of the examples. I suspect that some of the supporting files in theNios SDK expect particular names for certain signals and entities. I do have a friend who implemented a Nios16 on a custom board with the smallest Cyclone so it does/can work. :) KenArticle: 61083
A good friend in aerospace tells me that a common saying in those circles goes something like this: "At some point you'll need to shoot the engineers and fly the darn thing or it'll never be finished" -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Martin Euredjian To send private email: 0_0_0_0_@pacbell.net where "0_0_0_0_" = "martineu" "Tim" <tim@rockylogic.com.nooospam.com> wrote in message news:bl3k40$m5e$1$8300dec7@news.demon.co.uk... > Martin Euredjian wrote: > > I finished. > > Isn't design work like writing a book? You cannot > finish, you can only abandon. > >Article: 61084
ptkwt@aracnet.com (Phil Tomson) wrote in message > > Well, it's an ambitious project, anyway.... > > So are you trying to have a hardcoded interpreter? From what I think > you're saying, you want to build this parser and then (I'm guessing) you > want to either produce some bytecode for a VM (or in this case a real > machine (RM) implemented in the FPGA) or build some kind of AST and walk > it in your FPGA (?). Otherwise I can't see how it makes sense to just > have a parser in an FPGA. > I apologize for not providing the full context. What I am thinking of is a parser for XML data. I am aware of the possibility of using regular expressions, but the power of regular expressions is not sufficient for the processing I am thinking of. What I envision is a lot of XML data being quickly parsed and some action taken. > First off, what's the all-fire hurry? Parsing a simple language is pretty > quick as is in software. It would be a _lot_ easier to implement your CPU > in the FPGA and then use software to parse and compile the frontend > language into machine code that runs on your CPU. > I agree with you here. It does not make sense to optimize parsing when the time spent there is a small portion of the total time, as in compilers or interpreters. But if the tokenizing and parsing time dominate, then it makes a lot of sense to make them as fast as possible. I expect most of the data processing time to be spent in parsing and comparatively little in processing the parser output. (No hard data on that; that is my hypothesis still.) I might also add that I am drawn to FPGAs due to the possibility of doing things in parallel. Processing of UTF-encoded Unicode and tokenizing should be an order of magnitude faster than doing them on general-purpose computers. Then there is always the possibility of doing pattern matches on the parse tree in parallel. > Secondly, assuming that I'm guessing wrong and you just want a parser for a > programming language implemented in an FPGA: I think this could be pretty > difficult, but perhaps not impossible, especially if you have some large > amount of memory available either inside of or external to your FPGA. > You'd have a bytestream coming in which represents characters of your > tokens, a tokenizer (a state machine) and then another big state machine > that implements your parser. But again, after you've parsed this > language, what do you intend to do with it? > As should be evident by now, it is not a programming language parser. My question was really about the feasibility of doing a hardwired parser. I just wanted to know if I am completely crazy or just a little :-) > Phil Thanks!Article: 61085
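To make the hardwired-tokenizer idea concrete, here is a minimal VHDL sketch of a byte-at-a-time scanner that flags XML tag boundaries, consuming one input character per clock. It is purely illustrative - the entity, ports, and token outputs are invented for this example, and a real design would also need UTF-8 handling, attribute/comment/CDATA states, and a path into a symbol table - but a small FSM eating a character stream at line rate is the essence of what is being proposed.
=======================================================
library ieee;
use ieee.std_logic_1164.all;

entity xml_tokenizer is
  port (
    clk       : in  std_logic;
    rst       : in  std_logic;                      -- synchronous reset
    char_in   : in  std_logic_vector(7 downto 0);   -- one input byte per cycle
    char_vld  : in  std_logic;                      -- qualifies char_in
    tok_open  : out std_logic;                      -- pulses on '<' (tag start)
    tok_close : out std_logic                       -- pulses on '>' (tag end)
  );
end entity xml_tokenizer;

architecture rtl of xml_tokenizer is
  type state_t is (IN_TEXT, IN_TAG);
  signal state : state_t := IN_TEXT;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- default: no token this cycle (overridden below when one is found)
      tok_open  <= '0';
      tok_close <= '0';
      if rst = '1' then
        state <= IN_TEXT;
      elsif char_vld = '1' then
        case state is
          when IN_TEXT =>
            if char_in = x"3C" then       -- '<'
              tok_open <= '1';
              state    <= IN_TAG;
            end if;
          when IN_TAG =>
            if char_in = x"3E" then       -- '>'
              tok_close <= '1';
              state    <= IN_TEXT;
            end if;
        end case;
      end if;
    end if;
  end process;
end architecture rtl;
=======================================================
Because each byte is handled in one cycle of pure logic, the scan rate is set by the clock alone (e.g. a 100 MHz clock gives 100 Mbytes/s per instance), and several such scanners can run side by side on different streams, which is the parallelism being argued for above.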
"Glen Herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message news:<RK3db.592141$Ho3.113886@sccrnsc03>... > "Phil Tomson" <ptkwt@aracnet.com> wrote in message > news:bl2dji0svq@enews1.newsguy.com... > > In article <6ae7649c.0309261310.11abb60b@posting.google.com>, > > Ekalavya Nishada <enishada@yahoo.com> wrote: > > It would be nice to know the reason for the question. As far as I know, > parsing of existing languages isn't limiting compilation times. Most > languages were designed when machines were much slower than they are today, > and if it was a problem then, they might have designed the language to help > speed up parsing. I know many cases where that wasn't even true. > Apologies again as I have not been clear in my questions. As you mention later in your response, I am exploring this as an approach to parsing of XML data. More to the point, my question was to ask the FPGA experts here a) is it feasible to build a hardwired parser for a small language? b) what sort of performance improvements one might see even while interpreting parser tables as compared to doing the same in a regular CPU? > > One possibility that I could think of would be in pattern matching. If you > consider a program like grep a parser, which signals the point at which it > recognizes a correct match, then one could use it for high speed pattern > matching. > > -- glen Please see my response to Phil's questions also. Thanks!Article: 61086
> > Are you absolutely sure you implemented the exact same code? > > =-a Yes, I am. I just copied the files from one directory to another. I can't send the code to Altera because it's proprietary. I wanted to check if anyone else has seen a similar problem and has managed to solve it. Thanks, PrashantArticle: 61087
On a sunny day (26 Sep 2003 14:00:52 -0700) it happened ramntn@yahoo.com (ram) wrote in <61c2cc9d.0309261300.38798162@posting.google.com>: >the same with me, my inbox is pounded with bulk mail and i exceed storage in 20 min. >i lost my id too >Is there any soln to get away from this >ram > I found some sort of solution here: www.sneakemail.com The way it works is that they create a random email address at their domain, and you use that. Then they forward it to your real email. The sender does not get to see the real email address. Once the spammers get hold of it, you simply generate a new random address. I now have such a random address on my website, for feedback (on open source software), and it seems to work (just testing). Once they spam it, I will just generate a new one and change the link. Also, I followed the advice of some person here (thank you for the link; hmm, they should have referral fees) and bought my own domain from www.mydoname.com. That site actually is using domainsbyproxy.com. www.panteltje.com is up now (redirect). It cost me $25 for a year. We will see how it goes from here; I still need to print new cards (the yahoo email was on them). Yahoo is still full within the hour... Anyway, it's cool to have your own domain :-) JPArticle: 61088
enishada@yahoo.com (Ekalavya Nishada) wrote in message news:<6ae7649c.0309261310.11abb60b@posting.google.com>... > Greetings, > > I am new to hardware design and hoping to get a reality check on > building a lexical analyzer and parser using FPGAs. I can see the > following options. > > 1. A hardwired implementation of some or all of lexer&parser to > maximize the performance. > Cool. But why? :) Don't programmers spend 99% of their time debugging instead of compiling?Article: 61089
>I apologize for not providing the full context. What I am thinking of >is a parser for XML data. I am aware of the possibility of using >regular expressions but the power of regular expressions is not >sufficient for the processing I am thinking of. What I envision is a >lot of XML data being quickly parsed and some action taken. Where is all your XML data coming from? How are you going to get it to the FPGA? I'm not a parsing wizard. I'm pretty sure you could do much of the work in an FPGA. But what do you do when you find a name that needs a new entry in a symbol table as compared to an atom that can be turned into a simple code number? Assume the data is coming off the net. That's slow. You can parse it with the CPU. Assume it's coming off the disk. Why not parse it once and cache the answer? >I might also add that I am drawn to FPGAs due to the possibility of >doing things in parallel. Processing of UTF encoded Unicode and >tokenizng should be an order of magnitude faster than doing them on >general purpose computers. Then there is always the possibility of >doing pattern matches on the parse tree in parallel. You can't do things in parallel unless you have parallel paths through the bottleneck. What's the bottleneck? CPU? Memory? With an FPGA, it would probably be getting data to/from the FPGA. -- The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.Article: 61090
> Apologies again as I have not been clear in my questions. As you > mention later in your response, I am exploring this as an approach to > parsing of XML data. More to the point, my question was to ask the > FPGA experts here a) is it feasible to build a hardwired parser for a > small language? b) what sort of performance improvements one might see > even while interpreting parser tables as compared to doing the same in > a regular CPU? Yes this is doable and you get really good results. http://www.eetimes.com/story/OEG20020819S0033 http://tarari.com/products-xml.html http://tarari.com/about-technology.html Regular CPUs just can't keep up. SteveArticle: 61091
Symon, The problem with using the pullup/pulldown idea (if it's even possible) is that the values of the pullups and pulldowns are not tightly controlled. The common-mode point would thus not be in the middle of your 3.3V supply. What might work is: If you're able to enable the pullups, then the intrinsic diodes of the IOBs will keep the positive peaks of the signal clamped to one diode drop above 3.3V (assuming it's cap-coupled from your source). The negative peaks should be well below the 3.3V supply. Also, the diode current will be small. You should check with Xilinx to see if this approach will work, because the signal swing may not be within their diff amp's input range. Bob "Symon" <symon_brewer@hotmail.com> wrote in message news:a28bc07f.0309261442.131ae253@posting.google.com... > Dear All, > Problem. > I've just been let down by an oscillator manufacturer. They > can only make the ordered 3.3V differential LVPECL oscillator parts > work at 5V. Some excuse about their quartz supplier. So, I can't stick > 5V PECL into my 3.3V Virtex-E differential input; it's outside the > common-mode range. So, I could AC couple it with a couple of caps > after the PECL driver's emitter resistors. I then need to bias the > signal into the common-mode range of the Virtex-E diff input. > Question. > Anybody know if I could somehow activate the internal > pullup resistor on one input and the pulldown on the other to bias the > signal in the middle of the supply? There's already a 100 ohm > termination resistor between the pins. Or, any better ideas? > cheers all, Syms. > > p.s. I know I could use more resistors to do this biasing, but the > board layout makes this awkward. The Virtex-E is, of course, a BGA.Article: 61092
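To put numbers on the extra-resistor approach Symon wants to avoid: after the coupling caps, a divider of R1 from 3.3V to the input pin and R2 from the pin to ground re-centers the signal at

    Vcm = 3.3V * R2 / (R1 + R2)

With R1 = R2 = 10k (illustrative values, not from the posts), Vcm = 1.65V, right at mid-supply, and the Thevenin source resistance R1 || R2 = 5k is large compared with the 100 ohm differential termination, so it barely loads the line; one such pair is needed per input pin. Whether the AC-coupled PECL swing (roughly 800 mV per leg) then sits inside the Virtex-E differential input range is exactly the data-sheet check Bob recommends.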
No luck, my FAE won't call back and I don't have much hair left to pull. Maybe he is on vacation and Monday all will be right. I'll keep you posted. Jerry "Kenneth Land" <kland1@neuralog1.com> wrote in message news:vnb2eiqgd5fd96@news.supernews.com... > > "Jerry" <nospam@nowhere.com> wrote in message > news:vn70f8iodqolc4@corp.supernews.com... > > This is from memory since work doesn't have access to newsgroups, so here > > goes. > > > > We have a Stratix development board with the NIOS software package. Along > > with that > > is the FS2 debugger. I have worked through the enclosed tutorials, both the HW > > and SW. > > All was well and the debugger worked fine. I then wanted to add my own > > hardware into the > > FPGA. I created a new project, popped open Quartus 3.0 and SOPC Builder, > > creating a NIOS > > based system pretty much if not identical to the one in the tutorials. I > > used the germs_monitor.c. > > Popped open two SDK shells, one for nr -t -r and the other to compile and > > download the > > C code. nd -d hello_nios.c and nb hello_nios.out following the tutorial. > > Well, I get messages that > > it (OCI?) cannot get the com1 port, but the debugger window pops up. No > > green bar in the > > source code, which indicates that communications is down. I have yet to get > > this setup, Stratix board and > > debugger, to work on anything other than the tutorials. While in the SDK > > shell I did move to my project > > dir. > > > > Anybody out there have similar experiences? > > > > Thanks > > Jerry > > > > > > Hi Jerry, > > Any progress? I haven't built a Nios system from scratch. I always start > with one of the examples. I suspect that some of the supporting files in > the Nios SDK expect particular names for certain signals and entities. > > I do have a friend who implemented a Nios16 on a custom board with the > smallest Cyclone, so it does/can work. :) > > Ken > >Article: 61093
"Ekalavya Nishada" <enishada@yahoo.com> wrote in message news:6ae7649c.0309270903.22994d03@posting.google.com... > "Glen Herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message news:<RK3db.592141$Ho3.113886@sccrnsc03>... > > "Phil Tomson" <ptkwt@aracnet.com> wrote in message > > news:bl2dji0svq@enews1.newsguy.com... > > > In article <6ae7649c.0309261310.11abb60b@posting.google.com>, > > > Ekalavya Nishada <enishada@yahoo.com> wrote: > > > > It would be nice to know the reason for the question. As far as I know, > > parsing of existing languages isn't limiting compilation times. Most > > languages were designed when machines were much slower than they are today, > > and if it was a problem then, they might have designed the language to help > > speed up parsing. I know many cases where that wasn't even true. > > > Apologies again as I have not been clear in my questions. As you > mention later in your response, I am exploring this as an approach to > parsing of XML data. More to the point, my question was to ask the > FPGA experts here a) is it feasible to build a hardwired parser for a > small language? b) what sort of performance improvements one might see > even while interpreting parser tables as compared to doing the same in > a regular CPU? That makes a lot of sense. Somehow "lexer and parser" implies "programming language" to many people, as it is a common use for them. Many XML parsers are pretty slow, though I don't know that they have to be. -- glenArticle: 61094
Martin Euredjian <0_0_0_0_@pacbell.net> wrote: > "Vinh Pham" wrote: > >> You've got quite a set of challenging constraints there. So you're saying >> an FPGA is cheaper than using a frame buffer and a micro-controller? > > Well, yes. I have to process half a billion pixels per second. Kinda hard > to do with a microprocessor. Just buy an NVIDIA GPU. It might just about be able to do it. > >> Well if you pull this off, it'll be quite impressive. Please let us know > of >> your progress. > > It's nothing to warrant being impressed. Seriously. It's just a matter of > implementation. > But in a similar vein, hard AI is just a small matter of programming. -- Sander +++ Out of cheese error +++Article: 61095
Hi, I'm trying to implement a bidirectional bus in my code (VHDL, APEX20K1500E), but I'm having some trouble, which brought me to ask the question: How do I specify the direction signal while using a bidirectional bus? I don't find myself setting any enable signals when using the bidirectional bus. I would appreciate it if someone could explain how this works. Thanks, PrashantArticle: 61096
On 27 Sep 2003 17:30:57 -0700, prashantj@usa.net (Prashant) wrote: >Hi, > >I'm trying to implement a bidirectional bus in my code. (VHDL, >APEX20K1500E). But I'm having some trouble which brought me to ask the >question : > >How do I specify the direction signal while using a bidirectional bus >? I dont find myself setting any enable signals when using the >bidirectional bus. I would appreciate it if someone could explain how >this works. A bidirectional signal usually signifies read/write access to a resource (a memory location or a control register). If this is your application, the direction signal controls the enable signal of the bus drivers. In other words, when you read the data travels in one direction and when you write it travels the other direction. You set the enable signal correspondingly. As an example assume there is a driver which is active high for output enable from the memory to the outside, and there is an active high read, low write signal (rd_wrb). In this case, you'd connect the rd_wrb signal to the oe pin of the memory and when rd_wrb is one, data is driven from the memory to the bus and when rd_wrb is zero, memory uses the data on the bus. Hope this helps, Muzaffer Kal http://www.dspia.com ASIC/FPGA design/verification consulting specializing in DSP algorithm implementationsArticle: 61097
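As a concrete illustration of the read/write scheme Muzaffer describes, here is a minimal VHDL sketch of one side of such a bus (the entity, signal names, and 8-bit width are invented for the example): the direction signal gates a tristate driver, and the same inout pins are sampled when the other side is driving.
=======================================================
library ieee;
use ieee.std_logic_1164.all;

entity bidir_reg is
  port (
    clk    : in    std_logic;
    rd_wrb : in    std_logic;                    -- '1' = read (we drive the bus), '0' = write (bus drives us)
    data   : inout std_logic_vector(7 downto 0)  -- shared bidirectional bus
  );
end entity bidir_reg;

architecture rtl of bidir_reg is
  signal reg : std_logic_vector(7 downto 0) := (others => '0');
begin
  -- Drive the bus only during reads; otherwise release it to high impedance
  -- so the device at the other end can drive the same wires.
  data <= reg when rd_wrb = '1' else (others => 'Z');

  -- Capture the bus contents on writes.
  process (clk)
  begin
    if rising_edge(clk) then
      if rd_wrb = '0' then
        reg <= data;
      end if;
    end if;
  end process;
end architecture rtl;
=======================================================
The 'Z' assignment is what infers the tristate/bidirectional driver, which answers the original question: the "enable" is implicit in the when/else condition rather than a separate signal. Note that many FPGA fabrics only realize 'Z' on actual device pins, so synthesis typically converts internal tristate buses into multiplexers.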
"Jake Janovetz" <jakespambox@yahoo.com> wrote in message news:d6ad3144.0309261418.2910db14@posting.google.com... > What about the case where sync_reset=0 and clk_ena=0? Your code > doesn't describe the desired behavior for this case. Compare to the standard DFF description below. It is not required to explicitly specify what to do when CE='0'... process(clk) begin if rising_edge (CLK) then if CE='1' then Q <= D; end if; end if; end process; /Mikhail > > "MM" <mbmsv@yahoo.com> wrote in message news:<bl1i00$76rai$1@ID-204311.news.uni-berlin.de>... > > process(clk) > > begin > > if rising_edge (clk) then > > if sync_reset='1' then > > outf <= '0'; > > elsif clk_ena='1' then > > outf <= '1'; > > end if; > > end if; > > end process; > > > > Thanks, > > /MikhailArticle: 61098
I've had the same email address for almost eight years now, and I've never tried very hard to hide it, meaning that when I post to public forums like this I use my real email addr. I'm now receiving on the order of 300-400 (brutally repetitive) spam emails per 24hr period, and in the last couple of weeks I've been getting about 100-200 bogus Microsoft-security-patch-with-virus-attachment emails per day on top of that (Norton Antivirus pops up a warning box for each one it detects and I have to manually step through them). I've been patiently waiting for the Microsoft patch garbage to die off, but it hasn't. I'm a consultant and have my own domain name (nextstate.com), but I've finally decided to abandon it and start using a roadrunner email address. I won't bore you with my rage and frustration, but I'm curious how other people are avoiding, filtering out, or fighting back against this crap. Shouldn't the ISPs be attacking this problem with a little bit more enthusiasm? Very pissed off in San Diego, RJSArticle: 61099
I stand corrected... but as with metastability, you always get a 50/50 chance of being right :-) I believe setup and hold violations both give the same problem, though. Simon "Vinh Pham" <a@a.a> wrote in message news:SpScb.15906$5z.8325@twister.socal.rr.com... > > congratulations.. I think you have simulated metastability :-) > > dat doesn't need to be in your sensitivity list as dat changing won't > affect > > the simulation result unless clk changes. > > > > What you are seeing is a setup time violation. > > Simon, I think if you carefully look at the timing diagram, it's clearly a > hold-time violation. And what an awful flip-flop, to have such a long > meta-stability resolution time. Perhaps an upgrade to Aldec 6.1 will > provide better performing flip-flop models. > > Just joking, of course :-) > > Regards, > Vinh > >