Rank: Advanced Member
Groups: Registered, Registered Users, Subscribers, Unverified Users Joined: 10/28/2004(UTC) Posts: 3,111 Location: Perth, Western Australia
Was thanked: 16 time(s) in 16 post(s)
smg wrote: Programs suggested by you seem to be very powerful. However, they seem to be huge also.

What do you mean by huge? Large program sizes? Why, these days, is anyone worried about program sizes? If you are running out of disk space, look carefully at what you have on your disk and clean out the overheads and crap that reside there, or spend less than $200 and get a bigger, faster HD.

My local hardware dealer wrote: 320GB - SATA2 - Western Digital Hard Drive - 8MB Cache - 7200rpm [WD3200JS]. Large storage space, cool, quiet, and fast. Ideal for high-performance family and business computing. $173.95

There seems to be a lot of misconception that the more information you have on a hard drive, the slower the hard drive becomes. Bull! The read/write access time difference between a full disk and an empty disk is very nearly zero. The trick is to make sure the data is written contiguously, which means defragmenting the drive every now and then to keep the files all in one piece.

Microsoft wrote: One of
the biggest reasons for slow disk access is fragmentation. As you
probably know, disk fragmentation occurs as you create and delete files
on your hard disk. When you initially begin writing files to a new hard
disk, the disk stores the files in linear order. However, when you
delete a file, it leaves a gap of unused space between other files. To
avoid wasting this empty space, your computer will try to store other
files there. If a new file is bigger than the file that was erased,
your hard drive will store as much of the new file as it can in the
empty space and will store the rest of the file elsewhere on the drive.
The new file becomes fragmented. Fragmentation slows the rate at
which your computer can access files from the hard disk, because the
hard disk's read or write head must move around to different areas of
the disk to access various parts of the file. Each time that the head
moves, your hard disk must stop reading or writing until the move is
complete.

The speed is not related to HOW many files are on the disk, but to the manner in which they are stored. Exceptionally small marginal improvements in speed can be made by writing the most commonly used files to the edge of the disk where the read/write head resides, to reduce the amount of 'travel' time. If you are concerned about this time then you are sailing WAY too close to the edge and should probably get out of the house a little more often; go for a walk in the park and chill out for a while!

Similarly: if speed is your issue, then get some more RAM. I have 1GB of RAM, and even running all of my normal files, Firefox with at least 10 tabs open, email, MS, ScientificWorkPlace, VoIP, all my antivirus and protection, background clients, security etc., I am still under 600MB of RAM used. My ENTIRE ASX data folder, which includes a LOT of duplicated information (ASX100, ASX200, ASX300 etc.), is still only 329MB, so even if I were running an exploration/system test and loaded the entire ASX history into RAM, I would still have some room left over! If you have less than 1GB of RAM, spend about $130 to get 1GB of DDR400 RAM. This will improve system performance across EVERY application dramatically!

I have had discussions with several people recently about computers and computing speeds. All of the people I have been talking to have machines that exceed my machine's capabilities and performance in EVERY aspect, and yet I can run an MS exploration faster than all of them! Why? Because I have carefully coded my systems to be efficient, I reduce my data universe to a manageable size, and I don't scan every single stock, option, warrant etc. in the hope that some magical equity will appear! Good data management will beat a fast computer any day!

Hope this helps.

wabbit [:D]
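P.S. For anyone who wants to see the mechanics, the fragmentation process Microsoft describes can be sketched in a few lines of Python. The "disk", file names, and sizes below are invented purely for illustration; a real filesystem allocator is far more sophisticated, but the gap-reuse behaviour is the same idea:

```python
# Toy simulation of file fragmentation: the "disk" is a list of blocks,
# each holding a file name or None (free space).

def write_file(disk, name, size):
    """Write 'size' blocks into the first free blocks found, left to
    right -- which is exactly how gaps get reused and files get split."""
    placed = []
    for i, block in enumerate(disk):
        if block is None:
            disk[i] = name
            placed.append(i)
            size -= 1
            if size == 0:
                break
    return placed

def fragments(placed):
    """Count contiguous runs among the block indices used by one file."""
    runs = 1
    for a, b in zip(placed, placed[1:]):
        if b != a + 1:
            runs += 1
    return runs

disk = [None] * 10
write_file(disk, "A", 3)            # occupies blocks 0-2
write_file(disk, "B", 3)            # occupies blocks 3-5
disk = [None if b == "A" else b for b in disk]   # delete A: gap at 0-2
placed = write_file(disk, "C", 5)   # C fills the gap, then spills past B
print(placed)             # [0, 1, 2, 6, 7]
print(fragments(placed))  # 2 -- file C is now in two pieces
```

Defragmenting is simply rewriting each file's blocks back into one contiguous run so the head never has to jump mid-file.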
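P.P.S. The "reduce your data universe" point is easy to automate: filter the symbol list with cheap tests before handing it to the expensive exploration. The symbols, prices, and thresholds below are all made up for illustration; use whatever liquidity rules suit your own trading:

```python
# Trim the scan universe BEFORE running the exploration, so the
# expensive system test only ever sees stocks worth testing.

universe = {
    "BHP": {"price": 27.50, "avg_volume": 9_000_000},
    "XYZ": {"price": 0.02,  "avg_volume": 15_000},
    "WOW": {"price": 18.90, "avg_volume": 3_500_000},
    "ABC": {"price": 0.40,  "avg_volume": 80_000},
}

def tradeable(data, min_price=0.50, min_volume=500_000):
    # Skip illiquid micro-caps and penny stocks: scanning them only
    # slows the exploration down for equities you would never trade.
    return data["price"] >= min_price and data["avg_volume"] >= min_volume

watchlist = sorted(s for s, d in universe.items() if tradeable(d))
print(watchlist)   # ['BHP', 'WOW'] -- half the universe gone before the scan starts
```

Run the exploration over `watchlist` instead of the full exchange and the speed difference dwarfs anything a faster CPU buys you.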