My ISP has a spam filter, and a virus checker if you want it. It doesn't help the newsgroups, though; only email.

"Robert Sefton" <rsefton@nextstate.com> wrote in message news:YVsdb.12231$T46.6679@twister.socal.rr.com...
> I've had the same email address for almost eight years now, and I've
> never tried very hard to hide it, meaning that when I post to public
> forums like this I use my real email addr. I'm now receiving on the
> order of 300-400 (brutally repetitive) spam emails per 24hr period,
> and in the last couple of weeks I've been getting about 100-200 bogus
> Microsoft-security-patch-with-virus-attachment emails per day on top
> of that (Norton Antivirus pops up a warning box for each one it
> detects and I have to manually step through them). I've been patiently
> waiting for the Microsoft patch garbage to die off, but it hasn't.
>
> I'm a consultant and have my own domain name (nextstate.com), but I've
> finally decided to abandon it and start using a roadrunner email
> address. I won't bore you with my rage and frustration, but I'm
> curious how other people are avoiding, filtering out, or fighting back
> against this crap. Shouldn't the ISPs be attacking this problem with a
> little bit more enthusiasm?
>
> Very pissed off in San Diego,
>
> RJS

Article: 61101
> I've had the same email address for almost eight years now, and I've
> never tried very hard to hide it, meaning that when I post to public
> forums like this I use my real email addr. I'm now receiving on the
> order of 300-400 (brutally repetitive) spam emails per 24hr period,

It's well known that spammers harvest usenet. If you post without munging you will get spam.

There are two approaches to reducing spam. One is to block the mail so it never gets to your mail server. The other is to filter it into a junk bin (or bit bucket) after it arrives. With either approach, you have to decide how many false positives (lost valid mail) you are willing to tolerate and compare that to how much you care about false negatives (spam getting through). Opinions differ widely and often cause flame wars.

If you run your own mail system, you can probably get rid of most of it by using various block lists. It's a pain. You basically have to become a block-list wizard on top of all of your other sysadmin duties.

For filters, SpamAssassin gets good comments. I haven't used it. It runs on your system rather than on your ISP's mail system, so you can use it even if your ISP doesn't do much/enough anti-spam work. You have to download the junk first, so it may not be good enough if your link is full. There are other similar filtering programs. I think there are versions of SpamAssassin that run on the mail server.

You can outsource all the blocking/filtering. SpamCon has many good resources. In particular, start at http://www.spamcon.org/recipients/index.shtml and check out the link to filtered email services.

For usenet, you can use a munged (fake) address. That's a pain since the replies you actually want won't work without manual editing, and people like me who miss that step just go "Aw, shit" and dump the bounce message unless it's really important or interesting.

You can do tricks like using tagged addresses (foo+xxx@wherever) where the xxx part is a time stamp and messages older than some cutoff are automatically rejected. Details like the "+" depend upon your system, and it may not be supported. You can also use tagged addresses to find out who is passing your address on to others. Just assign an xxx whenever you give out an email address and remember who/when. (Doesn't work for business cards.)

> and in the last couple of weeks I've been getting about 100-200 bogus
> Microsoft-security-patch-with-virus-attachment emails per day on top
> of that (Norton Antivirus pops up a warning box for each one it
> detects and I have to manually step through them). I've been patiently
> waiting for the Microsoft patch garbage to die off, but it hasn't.

The security patch IS the latest virus. Pretty good social engineering, but easy to filter out. Some people advocate filtering out anything that MS mail readers might execute.

Another part of this mess is that the first wave of anti-virus filters sent back a "You have a virus" message. That was OK before the viruses started forging return info. Now the bounces can be as much of a problem as the viruses.

The best anti-spam discussion resource I know of is SPAM-L. The FAQ is at http://www.claws-and-paws.com/spam-l/tracking.html — best to read the FAQ and lurk for a day or three rather than jumping in with something like your message here.

Yes, ISPs should be attacking the spam problem much more aggressively. Unfortunately, nobody has figured out how to convince them to do that. Spam is like pollution or graffiti: the guy in a position to fix it doesn't have any economic incentive to do so.

> My ISP has a spam filter, and a virus checker if you want it.
> It doesn't help the newsgroups, though; only email.

Well-run news servers don't get much spam, at least not on the groups I read. I don't know the details of how it's done. I get news via supernews. I think there are others that are also essentially spam free.

Ugh. Sorry for all the OT clutter if everybody is tired of this crap.

--
The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.

Article: 61102
> I stand corrected.. but as with metastability.. you always get a 50/50
> chance of being right :-)

Heh, I had no clue which it was.

> but I believe setup or hold both give the same problem.

I'm no expert, but that sounds reasonable. In the end, it's still A Bad Thing (tm).

--Vinh

Article: 61103
> Very pissed off in San Diego,
>
> RJS

300-400? Ouch :_( I made the mistake of using my real email _once_ and I instantly got hit with spam mail. Luckily I'm only getting 40-80 every morning. I have my Outlook sorting emails into folders based on which of my many email addresses they're sent to. Luckily and inadvertently, all the spam email seems to go to my default Inbox folder, so I usually do a Ctrl-A and delete them all. Still, it's a hassle to keep doing it.

Since other more knowledgeable folks have given us interesting info, I'll bring up a slightly related topic. Deja (now owned by Google) archives all newsgroup postings (and I'm sure anyone could do it if they wanted to, e.g. the U.S. government). One way to prevent Deja from archiving your posts, if you want to, is to add to the Keywords field "world" on one line and "x-no-archive" on another. Outlook Express doesn't allow you to automate this, unfortunately (unless they recently added the feature). Of course this is a voluntary thing on Deja's part, and technically they don't have to respect your request.

If you're daring, try Googling your name along with a few personal details (where you live, lived, high school, college, etc.). We live in interesting times.

Regards,
Vinh

Article: 61104
Robert Sefton wrote:
> I've had the same email address for almost eight years now, and I've
> never tried very hard to hide it, meaning that when I post to public
> forums like this I use my real email addr. I'm now receiving on the
> order of 300-400 (brutally repetitive) spam emails per 24hr period,
> and in the last couple of weeks I've been getting about 100-200 bogus
> Microsoft-security-patch-with-virus-attachment emails per day on top
> of that (Norton Antivirus pops up a warning box for each one it
> detects and I have to manually step through them). I've been patiently
> waiting for the Microsoft patch garbage to die off, but it hasn't.
>
> I'm a consultant and have my own domain name (nextstate.com), but I've
> finally decided to abandon it and start using a roadrunner email
> address. I won't bore you with my rage and frustration, but I'm
> curious how other people are avoiding, filtering out, or fighting back
> against this crap. Shouldn't the ISPs be attacking this problem with a
> little bit more enthusiasm?
>
> Very pissed off in San Diego,

Install mailfilter. I have it running as a cron job every 3 minutes on Debian. I have it set to delete anything over 100kB with the word "microsoft" and a few other buzz words. It deletes the crap directly in your ISP POP3 account, even on a slow dialup line, so 10MB of M$ crap can be deleted in 20s. After that, email is as spam free as ever. However, your mailbox will fill up overnight unless you leave the PC on. Consider using another OS, if viable, that doesn't support a spam/anti-spam industry.

http://mailfilter.sourceforge.net/
http://mailfilter.sourceforge.net/download.html

Article: 61105
I have the same problem, and I use the "Message Rules" in Outlook Express. It has at least brought the spam down by 50%. Look for keywords in the subject, the sender's address, or the From line as mail arrives, and keep adding them to the rules. The action I've set for emails with these specific keywords is to delete them from the server so that they are not downloaded at all (I use a POP3 mailbox). But it is a menace, and it scares me away from posting on usenet...

--Neeraj

"jaideep" <jaideep@sasken.com> wrote in message news:c4312ee4.0309252242.2fa34638@posting.google.com...
> Jan Panteltje <panteltjeNSOAPM@yahoo.com> wrote in message news:<1064411674.535376@evisp-news-01.ops.asmr-01.energis-idc.net>...
> > What more can I say: I lost yahoo now.
> > That virus searches Usenet for email addresses, and then sends
> > thousands of times that Microsoft fix with the worm.
>
> Hi,
> I am also facing the same problem. Is there a fix to avoid this?
> Regards,
> Jaideep

Article: 61106
Thanks Muzaffer,

So how does this work for pins on an FPGA? I have specified some of the FPGA pins as bidirectional pins. I'm using the bidirectional pins to read from a register in the code or to write from a different register in the code. I don't understand how to specify the direction signal for the pins. I can do this at the signal/register level within the logic. But how are the bidirectional IO pins controlled?

Thanks,
Prashant

Muzaffer Kal <kal@dspia.com> wrote in message news:<edbcnv85jk9l4u2b2stv225lteknhub3le@4ax.com>...
> On 27 Sep 2003 17:30:57 -0700, prashantj@usa.net (Prashant) wrote:
> >I'm trying to implement a bidirectional bus in my code. (VHDL,
> >APEX20K1500E). But I'm having some trouble which brought me to ask the
> >question: how do I specify the direction signal while using a
> >bidirectional bus? I don't find myself setting any enable signals when
> >using the bidirectional bus. I would appreciate it if someone could
> >explain how this works.
>
> A bidirectional signal usually signifies read/write access to a
> resource (a memory location or a control register). If this is your
> application, the direction signal controls the enable signal of the
> bus drivers. In other words, when you read, the data travels in one
> direction, and when you write, it travels the other direction. You set
> the enable signal correspondingly. As an example, assume there is a
> driver which is active high for output enable from the memory to the
> outside, and there is an active-high read, low write signal (rd_wrb).
> In this case, you'd connect the rd_wrb signal to the oe pin of the
> memory; when rd_wrb is one, data is driven from the memory to the bus,
> and when rd_wrb is zero, the memory uses the data on the bus.
>
> Hope this helps,
>
> Muzaffer Kal
>
> http://www.dspia.com
> ASIC/FPGA design/verification consulting specializing in DSP algorithm implementations

Article: 61107
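At the FPGA's top level, the direction signal Muzaffer describes becomes the enable of a tri-state driver on an inout port. Here is a minimal sketch of one way to wire this up in VHDL; the entity, port names, and active-high oe convention are illustrative, not taken from the thread:

  library ieee;
  use ieee.std_logic_1164.all;

  entity bidir_pin is
    port (
      clk     : in    std_logic;
      oe      : in    std_logic;                     -- direction: '1' = FPGA drives the pin
      data_o  : in    std_logic_vector(7 downto 0);  -- from the internal output register
      data_i  : out   std_logic_vector(7 downto 0);  -- to the internal input register
      data_io : inout std_logic_vector(7 downto 0)   -- the bidirectional pins
    );
  end entity bidir_pin;

  architecture rtl of bidir_pin is
  begin
    -- Drive the pad only when oe is asserted; release it ('Z') otherwise
    -- so the external device can drive it.
    data_io <= data_o when oe = '1' else (others => 'Z');

    -- The pad is always readable; register it to keep the input path clean.
    capture : process (clk) is
    begin
      if rising_edge(clk) then
        data_i <= data_io;
      end if;
    end process capture;
  end architecture rtl;

Synthesis tools infer the tri-state driver in the IOB from the 'Z' assignment; everything inside the FPGA stays unidirectional, which matches the advice given later in this thread.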
Is there a way to have more than one computer work on compiling a design? I think the answer is "no". In that case, is there a way to send off the compilation job to another machine?

I find myself writing embedded code (using an outside tool) or doing CAD/EDA work while waiting for ISE to complete. Of course, everything slows down to a crawl while ISE is working. It'd be nice to be able to use a different computer on the network to do the crunching (it'd be nicer to have more than one machine parallel-process and do it faster!).

Any ideas, or am I thinking too far ahead?

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Martin Euredjian

To send private email:
0_0_0_0_@pacbell.net where "0_0_0_0_" = "martineu"

Article: 61108
"Kenneth Land" <kland1@neuralog1.com> wrote in message news:vnb20los6n5u3c@news.supernews.com... > > "SneakerNet" <nospam@nospam.org> wrote in message > news:zQJcb.159026$JA5.3914623@news.xtra.co.nz... > > Hi All > > > > I have a Nios Development Board that has a crystal osciallating at 50MHz. > > This is correct as I have seen the waveform and measured the frequency on > an > > osilloscope. > > > > I am trying to implement USB Prototcol, for which I need a clock speed of > > 48MHz. How can I reduce the clock speed from 50 to 48. I have written a > code > > that reduces a given speed to any speed, however it has its limitations. > > The code is presented below. This code is fully generic, thus user only > has > > to give the current clock speed and the wanted clock speed. This code > works > > fine as I am using this code to reduce the clock speed to 12.5MHz and > 25Mhz. > > However it does not work for 30Mhz and 48Mhz as the result is a fraction > and > > my code can't handle it. > > > > How can i fix this. How can I generate a clock of 48Mhz given that the > > crystal is 50Mhz. > > Pls Advice (Aplogoies in advance as the code does not have any comments, > but > > it is very self-explanatory..) > > > > Regards > > ======================================================= > > LIBRARY IEEE; > > USE IEEE.std_logic_1164.all; > > USE IEEE.std_logic_arith.all; > > > > entity slow_clk is > > generic ( > > Clock_Speed : integer := 50000000; > > New_ClkSpeed: integer := 50000000 > > ); > > port ( > > clock : in std_logic; > > slow_clock : out std_logic > > ); > > end entity slow_clk; > > > > architecture behavioural of slow_clk is > > constant con_StopCnt : integer := ((Clock_Speed / New_ClkSpeed) / 2); > > signal main_cnt : integer range 1 to ((Clock_Speed / New_ClkSpeed) / > 2); > > signal sig_TmpClk : std_logic; > > > > begin > > slow_clock <= sig_TmpClk; > > > > process is > > begin > > wait until rising_edge (clock); > > if main_cnt = con_StopCnt then > > sig_TmpClk <= not sig_TmpClk; > > main_cnt <= 1; > > end if; > > else > > main_cnt <= main_cnt + 1; > > end if; > > > > end process; > > > > end architecture behavioural; > > ======================================================= > > > > > (I'm assuming you're using the Nios Dev Board) > > It's easy. Just type in 48 MHz as the speed in the SOPC builder for your > Nios processor, then double click on the PLL that runs the Nios and goto > clock C0 screen and enter 24 and 25 for the multiplier and divisor. (I did > this to change my C0 to 68 MHz just to see if it would run that fast - no > problem!) > > You should also be able to use clock C1 as well and then not have to alter > the Nios's C0 clock. You could also probably alter the already hooked up E0 > clock, but I don't know the details. > > You probably solved this all by now. If you use Nios in the title I will > pick up on it quickly. I am working on a Nios project and would like to > discuss as many aspects of this technology as possible. > > I'd like to hear about your USB progress. We're using an external chip on > our custom board, but would be interested in a IP Core implementation. > > Ken > > Hi Ken Thanks for the response. I actually got the 48mhz to work but still bit confused on C1 and E0 clocks (in PLL) but no worries, i guess I'll try and figure that out soon. Regarding USB Implementation, I'm still trying my level best to get it to work. Basically I'm trying to get this core working that was mentioned on this newsgroup sometime back (Japanese version). 
It does not require any hardware, but i'm not able to make any progress on it. I have added my own PLL and connection the outputs to the leds (for debugging), and there are 2 leds that go on/off (USBEN and USBENLED), but becasue of no commenting i have very slight idea as to what's going on. Ken if you are interested in knowing what i have got with regards to USB comm u are welcome to send me a mail @ anangia@mailcity.com with USB/Nios as the subject. I'll be more than happy to send you my progress (though it's bit slow) RegaresArticle: 61109
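Ken's PLL numbers work because 50 MHz x 24 / 25 = 48 MHz; a counter-based divider like the quoted slow_clk can only hit integer divisions of the input. For those integer ratios (12.5 MHz, 25 MHz), the usual alternative to generating a divided clock in logic is to keep everything on the 50 MHz clock and generate a clock enable, which avoids the skew and routing problems of a logic-generated clock. A minimal sketch, with entity and generic names of my own choosing:

  library ieee;
  use ieee.std_logic_1164.all;

  entity clk_enable_gen is
    generic (
      DIVIDE : positive := 4  -- e.g. 50 MHz / 4: one pulse per 12.5 MHz period
    );
    port (
      clock  : in  std_logic;
      enable : out std_logic
    );
  end entity clk_enable_gen;

  architecture rtl of clk_enable_gen is
    signal count : integer range 0 to DIVIDE - 1 := 0;
  begin
    process (clock) is
    begin
      if rising_edge(clock) then
        if count = DIVIDE - 1 then
          count  <= 0;
          enable <= '1';  -- one clock wide, once every DIVIDE cycles
        else
          count  <= count + 1;
          enable <= '0';
        end if;
      end if;
    end process;
  end architecture rtl;

Logic that should run at the slower rate then qualifies its state changes with "if enable = '1'", so the whole design stays in a single clock domain. Non-integer ratios such as 50 to 48 MHz still need the PLL route Ken describes.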
Uwe Bonnes <bon@elektron.ikp.physik.tu-darmstadt.de> wrote in message news:<bl3snp$t5p$1@news.tu-darmstadt.de>...
> Puneet Goel <puneet@computer.org> wrote:
> : Hello Steven
> : I have actually purchased a Xilinx kit thinking that a Linux version
> : of WebPACK would be available.
> : I do not have Windows installed on my system.
> : Is there any plan to release WebPACK for Linux users? I am hoping
>
> I heard that the next version of WebPACK, scheduled sometime next year,
> should support Red Hat. Meanwhile, try running WebPACK with a recent,
> well-configured Wine.

Yes, I have "heard" this too. But... is there somebody from Xilinx who can [officially] confirm this? Actually, in my company the only missing EDA link for a Windows-to-Linux shift is Xilinx ISE. ;O( I don't care very much about a release date -- if there really is a Linux ISE coming, that would be a relief and would let us plan things accordingly.

Regards.

Article: 61110
Hello,

I spent some time experimenting with the Slave Parallel Mode configuration procedure available in the Spartan-IIE to configure an XC2S300E over the parallel port; the motivation was to use the same interface for configuration and, after startup, for external communication during normal operation. I'm using a CPLD to generate the necessary signals from the IEEE 1284 compatibility-mode handshake, and this is working just fine, but I'd like to follow up on the behavior I observed.

If I understand the datasheet correctly, the FPGA expects the user to load the complete bitstream before deasserting the /CS and /WRITE signals, and then proceeds to the CRC check. From what I've seen, this doesn't appear to be the case; the device drives DONE high after only 234448 of the documented 234456 configuration bytes are loaded. I'm assuming that I simply did not understand the datasheet correctly with respect to /WRITE and /CS, but why does the configuration complete early?

Thanks,

Article: 61111
Pablo Bleyer Kocik wrote:
> Actually, in my company the only missing EDA link for a Windows-to-
> Linux shift is Xilinx ISE. ;O( I don't care very much about a release
> date -- if there really is a Linux ISE coming, that would be a relief
> and would let us plan things accordingly.

The Linux version of ISE is already out (except for iMPACT). It is WebPACK that has not yet been released in a Linux version. Since WebPACK is basically just ISE limited to certain devices, I would not expect a long wait. As a wild guess, maybe Xilinx wants to use their (probably somewhat more savvy) ISE customers to find bugs before releasing WebPACK.

--
My real email is akamail.com@dclark (or something like that).

Article: 61112
On Sat, 27 Sep 2003 00:38:01 -0400, rickman <spamgoeshere4@yahoo.com> wrote:
> Andy Peters wrote:
> >
> > rickman <spamgoeshere4@yahoo.com> wrote in message news:<3F7257C6.91BECAD0@yahoo.com>...
>
> But you are making assumptions about the circuits I am building. The
> original issue was the fact that the Spartan 3 chips are sensitive to
> even short-term overvoltage due to ringing. Like I have said, I have
> never seen this in any data sheet until now. All the chips I have
> worked with either have specifically indicated that there would be no
> problem of damage due to small, short-term transitions outside the
> rated voltage spec, or this was stated when the manufacturer was
> contacted. The Spartan 3 chips are the first I have heard of where
> this is specifically contraindicated.

I also haven't heard of this before. But then, I haven't used 90nm parts before.

<speculation>
Perhaps this is the way of the future.
</speculation>

Allan.

Article: 61113
Hi all,

In Xilinx Application Note 151, "Virtex Series Configuration Architecture User Guide", there are Virtex equations for the LUT SelectRAM dependent variables (p. 11). I would like to ask: are there any equivalent equations available for Virtex-II Pro? I want to do dynamic partial reconfiguration of the LUT SelectRAM. I can find the slice position in FPGA Editor, but I can't figure out the frame address to pass into the ICAP (Internal Configuration Access Port) for reconfiguration.

Thanks in advance!

tk

Article: 61114
"Patrik Eriksson" <patrik.eriksson@netinsight.net> a écrit dans le message de news:3F68229C.8040806@netinsight.net... > The 6.1i/6.1i SP1 unisim DCM model doesn't work in a simulation that > works with the model included in 5.2i SP3. Has anyone else experience > the same problem? What has been changed? Why is the model changed? > > /Patrik Eriksson > Unisim DCM model is change and now the RST signal must be asserted for 3 CLKIN clock cycles. A test exists in VITAL model to print a message if the timing is not correct but the test doesn't work at "power up" because a signal is not correctly initialized. To make this working, do the folowing modifications. in "unisim_VITAL.vhd" and "simprim_VITAL_mti.vhd" files replace all occurences of the folowing line signal rst_reg : std_logic_vector(2 downto 0); by : signal rst_reg : std_logic_vector(2 downto 0) := "000"; and compile the modified files. Now the simulation must generate an error like: # ** Error: Timing Violation Error : RST on instance * must be asserted for 3 CLKIN clock cycles. # Time: 30 ns Iteration: 3 Instance: /testbench/u_2/dcm_i Gerard ThierryArticle: 61115
I won't say I know the perfect way, but typically you only have a bidirectional bus on the outside world. Inside the FPGA, use an output bus and an input bus. Xilinx have internal tristates, but they can pig out on resources (I think) and other vendors don't have them, so code portability will suffer if you use / rely on them.

  tristate_bus : process (enable, dat_o) is
  begin
    if (enable = '0') then
      dat_bus <= (others => 'Z');
    else  -- if enable = '1'
      dat_bus <= dat_o;
    end if;
  end process tristate_bus;

  dat_i <= dat_bus;

There's a simple tri-state bus in VHDL. I just typed it in without using a VHDL compiler or simulator, so all care, no responsibility, but it's a good starting place. All you have to do is feed dat_i to all register inputs and drive dat_o either from the internal tristate bus OR (better) from a multiplexer which takes the outputs from all the registers.

Simon

Here's another that works well; you will note the input multiplexer (case ADD_I) and the tri-state ('Z'). DAT_IO is the tri-state data bus on an HC11. This is actually part of an HC11 interface but is also generic.

  Read_Registers : process (RST_I, E_I, RD_I, nCICCS_I, ADD_I, SPI_SR, LOSS,
                            SPI_BUSY, TxBUSY, RxBUSY, DCDo, DCDi, LEDS,
                            DeviceReset) is
  begin
    if (RST_I = '1') or (E_I = '0') or (RD_I = '0') or (nCICCS_I = '1') then
      DAT_IO <= (others => 'Z');
    else
      case ADD_I is
        when IDr =>             -- Altera Revision
          DAT_IO <= conv_std_logic_vector(CARD_ID, 8);
        when STATUSr =>         -- Status bits
          DAT_IO(7 downto 6) <= LOSS & SPI_BUSY;
          DAT_IO(4 downto 0) <= '0' & TxBUSY & RxBUSY & DCDo & DCDi;
          DAT_IO(5) <= '0';
        when LED_STATUSr =>
          DAT_IO <= not LEDS;   -- Feed back the leds
        when SPIr =>
          DAT_IO <= SPI_SR;     -- SPI register
        when others =>          -- CIC 006/9
          DAT_IO <= DeviceReset & conv_std_logic_vector(CIC_NO, 7);
      end case;
    end if;
  end process Read_Registers;

  CH34_Latch : process (RST_I, E_I) is
  begin
    if RST_I = '1' then
      MODE2 <= O"7"; RE_ROUTE(2) <= '0';
      MODE3 <= O"7"; RE_ROUTE(3) <= '0';
    elsif falling_edge(E_I) then
      if (WrCs = '1') and (ADD_I = CH34r) then
        MODE2 <= DAT_IO(2 downto 0);
        RE_ROUTE(2) <= DAT_IO(3);
        MODE3 <= DAT_IO(6 downto 4);
        RE_ROUTE(3) <= DAT_IO(7);
      end if;
    end if;
  end process CH34_Latch;

I hope those give you some ideas.

"Prashant" <prashantj@usa.net> wrote in message news:ea62e09.0309281021.641cbbd9@posting.google.com...
> Thanks Muzaffer,
>
> So how does this work for pins on an FPGA? I have specified some of
> the FPGA pins as bidirectional pins. I'm using the bidirectional pins
> to read from a register in the code or to write from a different
> register in the code. I don't understand how to specify the direction
> signal for the pins. I can do this at the signal/register level within
> the logic. But how are the bidirectional IO pins controlled?
>
> Thanks,
> Prashant
>
> Muzaffer Kal <kal@dspia.com> wrote in message news:<edbcnv85jk9l4u2b2stv225lteknhub3le@4ax.com>...
> > A bidirectional signal usually signifies read/write access to a
> > resource (a memory location or a control register). If this is your
> > application, the direction signal controls the enable signal of the
> > bus drivers. In other words, when you read, the data travels in one
> > direction, and when you write, it travels the other direction. You
> > set the enable signal correspondingly. As an example, assume there
> > is a driver which is active high for output enable from the memory
> > to the outside, and there is an active-high read, low write signal
> > (rd_wrb). In this case, you'd connect the rd_wrb signal to the oe
> > pin of the memory; when rd_wrb is one, data is driven from the
> > memory to the bus, and when rd_wrb is zero, the memory uses the data
> > on the bus.
> >
> > Hope this helps,
> >
> > Muzaffer Kal
> >
> > http://www.dspia.com
> > ASIC/FPGA design/verification consulting specializing in DSP algorithm implementations

Article: 61116
Duane Clark <junkmail@junkmail.com> writes:
> Pablo Bleyer Kocik wrote:
> > Actually, in my company the only missing EDA link for a Windows-to-
> > Linux shift is Xilinx ISE. ;O( I don't care very much about a
> > release date -- if there really is a Linux ISE coming, that would be
> > a relief and would let us plan things accordingly.
>
> The Linux version of ISE is already out (except for iMPACT). It is
> WebPACK that has not yet been released in a Linux version. Since
> WebPACK is basically just ISE limited to certain devices, I would not
> expect a long wait.

What windowing library does Xilinx use for ISE under Linux? Are they using MainWin? If so, Xilinx would have to pay a license fee for every Linux version they sell or give away. If this is the case, I would not expect a Linux WebPACK, since they would lose money on each download.

Petter

--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

Article: 61117
"Jerry" <nospam@nowhere.com> writes: > C code. nd -d hello_nios.c and nb hello_nios.out following the > tutorial. Well I get messages that it (OCI?) can not get the com1 Maybe a stupid question, but did you check the "Enable NIOS OCI debug module" in the Debug tab when you creted your NIOS CPU in the SOPC Builder? In the NIOS more CPU settings you can select if you want your primarely debug port to be any UART you have specified or using OCI over the JTAG port. The latter is useful if you don't have any UARTs in your system. However, I had some problems that some library routines (the plugs ethernet library) will do serial port calls even if there are no UARTs in the system (see news:m3brtht54l.fsf@scimul.dolphinics.no ). Petter -- A: Because it messes up the order in which people normally read text. Q: Why is top-posting such a bad thing? A: Top-posting. Q: What is the most annoying thing on usenet and in e-mail?Article: 61118
Hi,

Can anybody tell me a practical way (in VHDL) to count the number of ones in a bit pattern? The counting should be done with combinatorial logic, which means (I think) that a for-to loop cannot be used here, but being quite new to FPGAs and VHDL, I'm not even sure. The number of bits to be counted is about 30, so a single look-up table is not the solution, using only a Spartan II.

Any suggestions? The purpose is, amongst others, pattern matching, where I count how many bits differ from the expected result.

Thanks,

Aart

Article: 61119
Aart van Beuzekom wrote:
> Can anybody tell me a practical way (in VHDL) to count the number of
> ones in a bit pattern? The counting should be done with combinatorial
> logic, which means (I think) that a for-to loop cannot be used here,
> but being quite new to FPGAs and VHDL, I'm not even sure. The number
> of bits to be counted is about 30, so a single look-up table is not
> the solution, using only a Spartan II.

I forgot to mention why I think the for-to loop cannot be used: I do not want the system to use any clock cycles - the result just has to be there.

Article: 61120
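For what it's worth, a for loop in VHDL does not by itself consume clock cycles: in a combinational process it is unrolled at synthesis into parallel logic (typically an adder tree), so the result "just is there" after the propagation delay. A minimal sketch of a combinational popcount; the entity and names are mine, and the five-bit result assumes a pattern of at most 31 bits:

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity ones_counter is
    generic (
      WIDTH : natural := 30  -- the count port below assumes WIDTH <= 31
    );
    port (
      pattern : in  std_logic_vector(WIDTH - 1 downto 0);
      count   : out unsigned(4 downto 0)  -- 0 .. 30 fits in five bits
    );
  end entity ones_counter;

  architecture rtl of ones_counter is
  begin
    -- Purely combinatorial: the loop unrolls into an adder tree,
    -- so no clock is involved.
    process (pattern) is
      variable acc : unsigned(4 downto 0);
    begin
      acc := (others => '0');
      for i in pattern'range loop
        if pattern(i) = '1' then
          acc := acc + 1;
        end if;
      end loop;
      count <= acc;
    end process;
  end architecture rtl;

For the pattern-matching use, XOR the received word against the expected word and feed the result into pattern; the count is then the number of differing bits (the Hamming distance).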
"Martin Euredjian" <0_0_0_0_@pacbell.net> writes: > Is there a way to have more than one computer work on compiling a > design? The Solaris version of par (Xilinx place and route tool) can do multiple iterations on multiple hosts (using the -m option to par). > In that case. Is there a way to send off the compilation job to > another machine? I do this all the time. I have a server running Solaris or Linux where I have scripts running through synthesis, map/fit, place & route, static timing analysis, generating programming files, and even upload the new fpga image to the device under test in the lab. Petter -- A: Because it messes up the order in which people normally read text. Q: Why is top-posting such a bad thing? A: Top-posting. Q: What is the most annoying thing on usenet and in e-mail?Article: 61121
I am trying to connect two FPGA development boards together. The boards in question are two Celoxica RC100 development boards. Video in is from an analog camera. The video data is converted to digital and stored in SRAM. There is an expansion header for inter-connectivity. On the other board, video out to a monitor occurs after reading data from the SRAM on that board.

I have connected the two boards via a ribbon cable between the expansion headers. I want to replace this cable with wireless or optical transmission. Are there any development boards available for this? The pixel clock is 10 MHz and there are at least 16 bits per pixel (32 after error-correction encoding). Access to an 80 MHz on-board clock is available.

Any help would be much appreciated.

Article: 61122
Hi, I'm trying to do partial reconfiguration with XAPP290; when I synthesize the project, two errors occur: 3317 and 3013. Can anyone help me solve these errors?

Thanks in advance

Antonio

Article: 61123
Can anyone tell me how I can implement a project with this functionality?

Thanks

Article: 61124