
Need a way to speed up QBasic

Status
Not open for further replies.

Guest_imported

Now, I've written a copy program which goes through loops
and writes byte by byte, but I can only get 30 kilobytes
per second because it doesn't go through the loops fast
enough (most likely caused by things I put in the loop:
speed counter, percent counter, time-remaining counter, etc.).
Is this CPU dependent? How do I speed it up?
 
Can you post the code in your loop?
 
You can speed it up by using a faster way to write: use PUT. You can also speed it up by using QB 1.1 and by running it in DOS. 4.5 is half as fast as 1.1, and Windows slows down DOS programs even more.
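For example (the file name here is just a placeholder), a single [tt]PUT[/tt] to a [tt]BINARY[/tt] file writes a whole block in one operation instead of one write per byte:
[tt]
OPEN "OUT.DAT" FOR BINARY AS #1
block$ = STRING$(4096, 0)
PUT #1, , block$ ' writes all 4096 bytes in a single operation
CLOSE #1
[/tt]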
 
There is another way to speed it up, too: buffer the output, so that instead of executing a write operation for every byte (which causes an entire cluster to be read in, modified, and written back out), you execute one operation per cluster. The following [tt]SUB[/tt]s do some simple buffering, and may be applicable to what you are doing. However, the design is basic: it cannot be intermixed with other output methods and can only handle one file at a time.
[tt]
SUB selectFileToBuffer(fileNumber%, bufferSize&)
SHARED bufferedFileNumber%, bufferedFileBuffer$, bufferedFileBufferOffset%

IF bufferedFileBufferOffset% THEN flushBuffer

bufferedFileNumber% = fileNumber%
bufferedFileBuffer$ = ""
bufferedFileBuffer$ = SPACE$(bufferSize&)
bufferedFileBufferOffset% = 0
END SUB

SUB writeBufferedChar(character$)
SHARED bufferedFileNumber%, bufferedFileBuffer$, bufferedFileBufferOffset%

bufferedFileBufferOffset% = bufferedFileBufferOffset% + 1
MID$(bufferedFileBuffer$, bufferedFileBufferOffset%, 1) = character$
IF bufferedFileBufferOffset% = LEN(bufferedFileBuffer$) THEN
PUT #bufferedFileNumber%, , bufferedFileBuffer$
bufferedFileBufferOffset% = 0
END IF
END SUB

SUB writeBufferedByte(asciiValue%)
writeBufferedChar CHR$(asciiValue%)
END SUB

SUB writeBufferedString(stringToWrite$)
SHARED bufferedFileNumber%, bufferedFileBuffer$, bufferedFileBufferOffset%

a$ = stringToWrite$ ' Local copy
bytesLeft% = LEN(a$)
DO WHILE bytesLeft%
bytesLeftInBuffer% = LEN(bufferedFileBuffer$) - bufferedFileBufferOffset%
IF bytesLeftInBuffer% > bytesLeft% THEN EXIT DO

MID$(bufferedFileBuffer$, bufferedFileBufferOffset% + 1, bytesLeftInBuffer%) = LEFT$(a$, bytesLeftInBuffer%)
PUT #bufferedFileNumber%, , bufferedFileBuffer$

a$ = MID$(a$, bytesLeftInBuffer% + 1)
bytesLeft% = bytesLeft% - bytesLeftInBuffer%
bufferedFileBufferOffset% = 0
LOOP
MID$(bufferedFileBuffer$, bufferedFileBufferOffset% + 1, bytesLeft%) = LEFT$(a$, bytesLeft%)
bufferedFileBufferOffset% = bufferedFileBufferOffset% + bytesLeft%
END SUB

SUB flushBuffer
SHARED bufferedFileNumber%, bufferedFileBuffer$, bufferedFileBufferOffset%

IF bufferedFileBufferOffset% THEN
usedBuffer$ = LEFT$(bufferedFileBuffer$, bufferedFileBufferOffset%)
PUT #bufferedFileNumber%, , usedBuffer$
bufferedFileBufferOffset% = 0
END IF
END SUB

[/tt]
To start using these [tt]SUB[/tt]s, call [tt]SUB selectFileToBuffer[/tt] and pass it a file number opened by [tt]OPEN[/tt] in [tt]BINARY[/tt] mode, as well as a buffer size (4096 or 8192 should be about optimal -- smaller buffers cause degraded performance, but QB has trouble with strings larger than 8 kilobytes (it can do them, but the string space gets all fragmented and messy)).

Then, to output bytes or strings, call the appropriate output function (the routines remember the file number automatically). When you are done, call [tt]SUB flushBuffer[/tt] before closing your file, otherwise some data will not be written to file and will be lost.

You can switch from your own output to this buffered output at any time (provided the file was opened [tt]FOR BINARY[/tt]), and you can switch back to your own output code at any time by calling [tt]SUB flushBuffer[/tt]; however, doing so very often would reduce the effectiveness of the buffer and degrade performance. When switching back from your routines to the buffered routines after the buffered routines have already been in use, you do not need to call [tt]SUB selectFileToBuffer[/tt] again, as all the settings are remembered and the buffer is in a proper state from [tt]SUB flushBuffer[/tt]. It should also be safe to switch the buffer from one file to another (it automatically flushes the buffer), but the same performance issue applies.
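For instance, a minimal driver for these routines (the output file name here is just a placeholder) could look like this:
[tt]
DECLARE SUB selectFileToBuffer (fileNumber%, bufferSize&)
DECLARE SUB writeBufferedByte (asciiValue%)
DECLARE SUB flushBuffer ()

OPEN "OUT.DAT" FOR BINARY AS #1
selectFileToBuffer 1, 4096
FOR i% = 0 TO 255
writeBufferedByte i% ' each call only touches the string buffer
NEXT
flushBuffer ' write out the partial buffer before closing
CLOSE #1
[/tt]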

I will be shortly posting this as a FAQ.

Good luck :)
 

Ok, here's my source. It's a disaster, nothing defined,
looks like spaghetti, etc., but it's only an exercise.

CLS
INPUT "sourcefile (path+filename+extension) : ", source$
OPEN source$ FOR BINARY AS #1
IF LOF(1) = 0 THEN
PRINT ""
PRINT "File doesn't exist, is already being used or contains nothing."
END
END IF
PRINT ""
PRINT "File found, size is"; LOF(1); "Bytes."
PRINT ""
INPUT "targetfile (path+filename+extension) : ", target$
OPEN target$ FOR BINARY AS #2
IF LOF(2) > 0 THEN
PRINT ""
PRINT "File already exists or is already being used."
END
END IF
PRINT ""
PRINT "File not found, press a key to start the copy process."
PRINT ""
CLS
DO UNTIL a >= LOF(1)
GET #1, , portal
PUT #2, , portal
LET a = a + 4
LOCATE 1, 1
PRINT a; "\"; LOF(1)
LET b$ = RIGHT$(TIME$, 2)
IF b$ > c$ THEN
LET c$ = b$
LET e = d / 1024 * 4
LET d = 0
LOCATE 2, 1
PRINT e; "KBytes per second."
END IF
LET d = d + 1
LET f = LOF(1)
LET f = f - a
IF e <> 0 THEN LET f = f \ e
LET f = f / 1024
LOCATE 3, 1
PRINT " estimated time left in seconds is"; f
LOOP
PRINT ""
PRINT "Copy process successfully completed."
END
1 :
PRINT ""
PRINT "Invalid input."
END

 
After reading logiclrd's post you should have a better idea of why the code is slow. If you don't understand, here's a simplified buffer:

BUFFER$ = STRING$(4096, 32) ' fill with spaces so RTRIM$ can clip the excess
GET #1,, BUFFER$

BUFFER$ = RTRIM$(BUFFER$)

which would bring in 4096 bytes from disk much, much faster than reading it in one byte at a time, because, as logiclrd said, the smallest amount of data that the hard drive can read is a cluster, so reading any less than that is a waste.

You can write the data out the same way using PUT.
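Putting GET and PUT together, a block-copy loop along those lines (a sketch, assuming the source is already open FOR BINARY as #1 and the target as #2) might look like this:
[tt]
blockSize& = 4096
buffer$ = SPACE$(blockSize&)
blocksNeeded& = LOF(1) \ blockSize&
FOR i& = 1 TO blocksNeeded&
GET #1, , buffer$ ' read a whole block
PUT #2, , buffer$ ' write it back out in one operation
NEXT
' copy the leftover partial block, if any
leftover& = LOF(1) - blocksNeeded& * blockSize&
IF leftover& THEN
buffer$ = SPACE$(leftover&)
GET #1, , buffer$
PUT #2, , buffer$
END IF
[/tt]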

That's just to help you understand, because you didn't say in your post whether you did. Sounds like you're going for speed, so logiclrd's code is the best. My simple example is also inefficient because it fills a string with spaces so that when you read past the end of the file (you're trying to read 4096 bytes and there's only 2000 left) it can use RTRIM$ to clip the excess.

 
Wait a sec - forget that RTRIM$ thing. I just pulled that out of nowhere; it would work for data that has no " ", which is usually what I'm working with, but it's not a good idea for random data or your typical program data.
 

So, you can define the size of the buffer in which you
transfer your data; that's only with a string? I used
a variable.
Anyway, I tried what you said, but it seems to have the same
result as the variable had, so it doesn't really make use
of the available space in the buffer. What's wrong?
 
Hmm... I'd make a test program that just reads or writes with buffers so you can get the hang of them first, before trying to use them to do anything. The PRINT statements in your loop will also slow it down a LOT.

You can read in any kind of variable of any size from disk. Just think of it as a big mess of 1s and 0s. If you read in a byte, it would read one byte; if you read an integer, it would read in 2, etc...
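For example, the number of bytes [tt]GET[/tt] pulls in is just the size of the variable you hand it (the file name here is a placeholder):
[tt]
DIM oneChar AS STRING * 1
DIM oneInt AS INTEGER
DIM oneLong AS LONG

OPEN "TEST.DAT" FOR BINARY AS #1
GET #1, , oneChar ' reads 1 byte
GET #1, , oneInt ' reads 2 bytes
GET #1, , oneLong ' reads 4 bytes
CLOSE #1
[/tt]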
 
Like hardcor said, the printing will slow the operation considerably.

You don't really want all those calculations and printing every time through the loop.

Use two DO loops. In the first one, start copying bytes and count the number of chars copied in 1 second to get the transfer rate (BTW, where do you assign a value to c$?). Then calculate the time left in seconds (I'll call it remtime%). Print them out and exit the loop.

In the second DO loop, carry on copying chars till EOF.
In that loop include this:-
a% = TIMER
IF a% <> b% THEN
b% = a%: LOCATE 3, 38: PRINT " "
LOCATE 3, 38: PRINT remtime%
remtime% = remtime% - 1
END IF
That will print out the remaining time once every second.
 
Actually, it makes a lot more sense to do the timing inline, something like this (pseudo-code):
[tt]
chunksLeft = number of chunks of data to process
chunksDone = 0
startTime = TIMER
DO WHILE chunksLeft
chunk = getChunk(file) 'get one chunk from the file
process chunk 'do whatever
chunksLeft = chunksLeft - 1
chunksDone = chunksDone + 1
elapsedTime = TIMER
IF startTime > elapsedTime THEN startTime = startTime - 86400 'TIMER wrapped past midnight
secs = elapsedTime - startTime
IF secs > 1 THEN ' wait until at least a second has passed so the rate is meaningful
avgChunksPerSec = chunksDone / secs
avgSecsLeft = chunksLeft / avgChunksPerSec
percentDone = 100 * chunksDone / (chunksDone + chunksLeft)
LOCATE , 1
PRINT USING "Processing (###%) (### seconds remaining)..."; percentDone; avgSecsLeft;
END IF
LOOP

[/tt]
This method generalizes to loops that write as well:
[tt]
chunksLeft = number of chunks of data to write
chunksDone = 0
startTime = TIMER
DO WHILE chunksLeft
chunk = getNextChunk() 'retrieve the next chunk
write chunk 'store the chunk on disk
chunksLeft = chunksLeft - 1
chunksDone = chunksDone + 1
elapsedTime = TIMER
IF startTime > elapsedTime THEN startTime = startTime - 86400 'TIMER wrapped past midnight
secs = elapsedTime - startTime
IF secs > 1 THEN ' wait until at least a second has passed so the rate is meaningful
avgChunksPerSec = chunksDone / secs
avgSecsLeft = chunksLeft / avgChunksPerSec
percentDone = 100 * chunksDone / (chunksDone + chunksLeft)
LOCATE , 1
PRINT USING "Processing (###%) (### seconds remaining)..."; percentDone; avgSecsLeft;
END IF
LOOP

[/tt]
 
I've used QB for processing large files in the past, and have used 4K blocks like hardkor1001110 suggests with good success, using the same algorithm that logiclrd posted above for processing the blocks after calculating how many blocks you're going to need to read in. Printing the results is fine if you want to watch what's going on and have extra time, but, like they said, it takes a LOT longer to do that than to just process the material and leave the screen alone.
 