I spent a few hours grappling with data corruption until I found something interesting. My code was getting crunched after a certain point because of an sprintf overwriting the first byte of the array right after it. After some research, I found something interesting and irritating in the sprintf documentation:

>Security Note: There is no way to limit the number of characters written, which means that code using sprintf is susceptible to buffer overruns. Consider using the related function _snprintf, which specifies a maximum number of characters to be written to buffer, or use _scprintf to determine how large a buffer is required. Also, ensure that format is not a user-defined string.

This problem was ignored in the command line version because the same area was overwritten each time. But in this new world of GUI, it's a problem, particularly because the array (here, array2) is a time difference that increases in length, and doesn't have a problem before it passes 100 seconds. My code approximates to this:

```c
struct foobar
{
    char array1[/* a generous size */];
    char array2[/* just big enough for a short timestring */];
} foo;

foo.array2[0] = '0';
foo.array2[1] = '0';
foo.array2[2] = ':';
sprintf(&(foo.array2[3]), "%9f", diff/(1.0e6)); // This overwrites by one byte.
// Function that writes to array1 goes here (also using sprintfs, but with
// enough of a buffer that the danger is minimized, or something.)
```

If the time in array2 gets too large, it overflows the buffer and corrupts the variable that follows it. I have managed to get this under control by placing the sprintf and its array as shown above. This way, the function writing array1 can't be overwritten by the already-completed array2 sprintf, and array2 has been placed at the end of the struct so that it won't be able to overflow into another variable.

I don't know when the time will roll over to 100 because of the data's nature. How can I fix this without relying on placement of code? I also don't know what the maximum size of the time difference (from the start) will be, so I can't use _snprintf or _scprintf. Of course, those don't seem to be in the libraries available to me, either.

>"memory is cheap" means you don't have to be too stingy about
>allocating a few more bytes of memory than needed.

True, but you still have to be practical about it. If you take the "memory is cheap" approach throughout the system, a few more bytes in every routine can add up. It's better to figure out exactly what your upper limit will be and, if you feel the need, allocate that much memory for all cases. At least that way you will understand your memory needs better than if you just add "a few more bytes" until the problem appears to go away.

>Just allocate one or two more bytes than really needed will save
>slitting your wrists trying to find the bug.

And if that isn't enough for all cases? You'll have created an even more subtle bug, and what's worse than slitting your wrists trying to find a bug is not even knowing that the bug is there. I'm sorry, but it seems too much like you're advocating ignorance about the program's memory needs for me to agree with you. I can count on one hand the number of times that I've blindly allocated too much without knowing exactly what the most I needed was. It's no accident that I produce very few bugs in my code.

Believe me, I'm not happy with this approach, either. I've thrown in an assert (verify, actually) to make the program fail in a traceable way if it starts approaching a dangerous timestring. I've also heavily commented the problem, and I'll include it in the documentation.

EDIT: Looks like verify isn't going to work as I wanted it to. I'll have to try something else (a modal dialog box, for example) to warn the user in the release build.

I don't like how long I have to wait for this application to load correctly. I've got a sort of 'scrolling buffer' memory allocation system in mind. It's just a way to display a limited section of the file at a time, reusing the same memory and storing byte offsets to move backwards. It would drastically reduce the memory requirements and apparent access time, but it would be more difficult to code.