Mar 91 Mousehole
Volume Number: 7
Issue Number: 3
Column Tag: Mousehole Report
RAM Cache and THINK C
By Larry Nedry, Mousehole BBS
From: Dave
Re: THINK C Compilation and RAM cache
Anyone know why my compiles get slower as my RAM cache gets bigger? There are multi-second pauses as THINK C reads header files when my cache is at 1024K, and this is without VIRTUAL!
From: Istewart
Re: THINK C Compilation and RAM cache
Well, if you're running under the Finder, it could be that you're reducing the available heap space by about a megabyte! Just a guess, but that may make the compiler work harder paging things to and from disk.
Unless you have no other use for a megabyte of memory, it seems a bit extreme to assign it to a cache! Try 64 or 128K ...
From: Dave
Re: THINK C Compilation and RAM cache
Well, I have 5 meg on my machine, and I leave plenty of space for System, Finder, and THINK C to operate even when I have a 1 Meg cache. With all the header files I'm including, I thought it would be a really good idea to cache them so they don't have to be read each time I compile a new file. Am I just being thick or is there something screwy here?
From: Istewart
Re: THINK C Compilation and RAM cache
I'm assuming that you're referring to the cache that you set from the control panel ... (please correct me if I'm wrong on that point!)
Here's my THEORY (for what it's worth; someone out there please set me straight if I've got it wrong):
The RAM cache saves a copy of the most recently accessed disk sectors. If it's still there when the next request to read the same sector is processed, the OS can just use the copy in RAM instead of having to wait to read it from disk.
The RAM cache is smaller than the disk; once it's full, it'll purge out the least recently accessed sector to make room for the next one that has to be physically read from disk.
Several things will affect the efficiency of this process:
1) The type of access you're doing. If you read each sector once, and don't re-read it before it gets purged out, the buffer hasn't helped you at all. In fact, the overhead of maintaining it is your loss. If, however, you're repeatedly accessing a few sectors, they can all be kept in memory, and the saving should be substantial.
2) The size of the cache. If it's too small, then blocks will be purged out too often before they can be re-used. If it's too large, then the system may spend more time searching for the sector in the buffer than it would have if it had just requested it from disk. Compounded with that, you may incur this overhead just to find that the sector ISN'T in the cache, and then have to read it from disk anyway! This is most likely on long sequential accesses.
3) The relationship between the speed of the cache search and the length of time it would take to just read the sector directly from the disk. The cache search speed is affected by CPU speed and the efficiency of the search algorithm. If you have a slow search time and a fast HD, the point of diminishing (and subsequently negative) returns would be reached earlier than it would if you had a faster CPU and/or a slower disk.
In your case, I think the cache size is so big that it's counter-productive. You've got 1MB of cache, enough for about 2000 512-byte sectors. If it's doing a linear search of this, then it's possible that it's spending more time searching the cache than it's saving you on disk accesses!
That's my theory, for what it's worth! I'd suggest experimenting to find the optimum size for your setup. My own SE (4MB, slowish HD) has it set at 128K, and I'm happy enough with that setting. I think my Mac II at work (5MB, faster HD) is set to 128K also, though I use that mostly for WP and running a terminal emulator.
(If anyone out there really KNOWS what's going on inside there, please let us know!! If I had to design it, I'd keep a doubly linked list, with the most recently used sector on one end, and the least recently used on the other. When it searched for a sector, it would start with the most recently used, progressing towards the least. To discard a sector to make room for a new one, it would simply drop off the one at the other end of the chain! Anyone have any better ideas?)
From: Dave
Re: THINK C Compilation and RAM cache
I have a better idea, I think. I would keep a fixed-size hash table for all blocks in the cache, reducing search time to a constant. Especially for compilation, where header files have a certain order of precedence, a least recently used cache replacement algorithm will just waste time if the cache is too small. I think I want an INIT that tries to do some special kind of caching for compilation.
From: Istewart
Re: THINK C Compilation and RAM cache
I thought of hashing; it would be great for finding a sector already in the cache. However, I had a problem figuring out how to determine the least recently used sector quickly.
Remember that the cache is general purpose - it's not necessarily designed to optimize one specific type of task!
I wonder if anyone's created an INIT that does anything more specific for compilation? I think I saw one on AOL that claimed to speed up something to do with THINK C, but I never got further than the title, so I don't know how it works.
From: Dave
Re: THINK C Compilation and RAM cache
Remember, LRU is not necessarily a good thing unless your cache is very large. Header files have an implicit ordering on them (especially in object-oriented programs), because they often must include each other. If the cache is smaller than the total size of all headers, you will often replace the lowest files in the ordering at the end of compiling one C file, only to proceed to the next C file and replace the files you will need later. This leads to a vicious MISS-EVERY-TIME cycle. Think about it. You really want something like most-often-used. That's an easier criterion to meet.
From: Btoback
Re: THINK C Compilation and RAM cache
But most-often-used is the keep criterion, which means least-often-used is the discard strategy. That is the same as least-recently-used unless cache miss statistics are kept on the sectors that aren't in the cache. For that to be useful, the cache has to be resettable, or cache statistics have to be kept on every sector on the disc. In practice, if LRU isn't good enough, the cache isn't going to help anyway.
From: Istewart
Re: THINK C Compilation and RAM cache
I remember that in my last post I pointed out that the cache is general-purpose, and not specifically designed for the type of access made by compilers.
I agree with you; this general-purpose scheme is not helpful when processing header files.
If I were devising a scheme specifically for this situation, it would probably be based on good old-fashioned double buffering, if the hardware/OS will support asynchronous disk access, though I guess there must be plenty of approaches that can be applied in specific situations.
However, this is all speculation, and I'm not about to write a compiler to check out my theory! Any volunteers out there???!
From: Dave
Re: THINK C Compilation and RAM cache
Sorry to flame, but I disagree. LRU is not at all the same as most-often-used when long repeating sequences of sectors are loaded that tend to vary at the ends of the sequences. You could do much better than LRU if you cache by file rather than by sector, and you have a well-defined (small) set of eligible files (as for C compilation).
From: Btoback
Re: THINK C Compilation and RAM cache
Hmmm... If you're referring to caching schemes that don't know about disc structure, we can try an experiment. We can write an INIT that'll record the location and size of all disc transfers, then do a compile and analyze the resulting data. If you're referring to caching schemes that know about the file system, that will be harder to do. Actually, though, a RAM disc, which is in effect a manually loaded file cache, might be the best solution.
From: Dave
Re: THINK C Compilation and RAM cache
I like your idea in general, but what do you mean by "know about disc structure"? The problem with a RAM disc is that programs under development often cause crashes => bye-bye changes to your .h files!
But basically a RAM disk shadowed by non-volatile storage with SOME smarts is exactly what we want. I don't want to have to do an analysis and load the RAM disk by hand. The INIT might be a good first step toward the real thing, though.
From: Sonnyb
Re: Problems with C++
I have recently started to learn C++ and as a starting point keyed in the program that comes with the MPW C++ documentation. This program implements a class called BigInt. I noticed that everything works well when keyed in exactly as written. If you declare a BigInt variable with an initializer (e.g. BigInt c = a + b, where a and b are previously declared BigInts), everything works fine. But if you declare c first and place c = a + b on the next line, the program bombs with a message in MacsBug about an attempt to deallocate a corrupt or unallocated structure. Looking at this with the debugger reveals that a dummy variable seems to be allocated when we use a previously defined BigInt the second time around. Is this a bug in C++, or am I missing something? Any help would be appreciated.
From: Btoback
Re: Problems with C++
The problem is that the way the class BigInt is built, you need to overload the assignment operator to have things work properly. Here's what's happening:
You have the class BigInt defined as
//1
class BigInt {
	int fNumDigits;
	char *fDigits;	// Pointer to free store
};
You also have a constructor which allocates space for the digits in BigInt and saves a pointer to the storage in fDigits. Now, when you first declare BigInt c = a + b, where a and b are BigInts, the compiler allocates space for c, calls the constructor which allocates space for fDigits, and all is well. But next, you say
//2
c = a + 2; // a is a BigInt
Here, the compiler creates a temp to hold the BigIntified integer 2, adds a to it by calling the overloaded + operator, and then copies the temp to c. It then DELETES THE TEMP! The destructor deletes the storage pointed to by the temp's fDigits field, and now c's fDigits field points to deleted storage. The next time c is used (in your sample, it's when c's destructor is called at the end of the program), kaBOOM!
The solution is to overload the assignment operator:
//3
BigInt& BigInt::operator=(const BigInt& from)
{
	int i;
	char *p, *q;

	fNumDigits = from.fNumDigits;	// Copy the number of digits
	delete [] fDigits;	// Delete the old storage for the digits
	fDigits = new char[fNumDigits];	// Allocate new storage for the
	p = fDigits; q = from.fDigits;	// digits and then copy them.
	i = fNumDigits;
	while (i--) *p++ = *q++;
	return *this;	// Return a reference for the compiler.
}
The assignment operator properly allocates new private storage for the digits array, rather than having the compiler just copy the old pointer.
By the way, the declarations you uploaded didn't include a declaration for the print function, although you had the definition.
From: Sonnyb
Re: Problems with C++
Thanks a lot, Bruce. This really makes what is happening quite clear. I will try it and see how it works.
Again, thanks for all your help.
From: Atom
Re: C++ loads and MacApp
Does anyone know for sure why MABuildTool claims that C++ loads are inconsistent with NeedsFPU? If this is true it is bad news for me, as reading in the MacApp headers in text form adds over a minute onto each compile, and I need to use the FPU directly.
From: Btoback
Re: C++ loads and MacApp
The problem has been fixed in MacApp 2.0.1, according to the documentation I just received with E.T.O. disc #2. You should be getting yours very soon if you have it on order. Otherwise, I guess you need to call APDA.
From: Atom
Re: C++ loads and MacApp
Thanks much for answering. I'm afraid I can't afford the E.T.O. subscription, especially not after digging deep to get the MPW Pascal compiler just so I could fix problems like this myself. Trouble is, I don't see the reason for the problem here. None of the options set by -NeedsFPU is incompatible with C++ dump/load as far as I can tell. Am I missing something, or is it OK to just comment out lines 1761-1766 of MABuildTool.p:
IF fCPlusLoad & fNeedsFPU THEN
	BEGIN
	Echo('### MABuild: Warning: CPlusLoad and NeedsFPU are incompatible. Using NoCPlusLoad.');
	fCPlusLoad := FALSE;
	END;
From: Btoback
Re: C++ loads and MacApp
They took out the lines you mentioned in 2.0.1, so I suspect it has something to do with the restriction that static initializers can't be included in the load/dump file. If you remove the lines, check for things in the MacApp C++ interfaces that are conditioned on qNeedsFPU. If you find some, and they contain static initializers, you might take the initializations out and place them in executable code (in InitUMacApp, for example).
MABuildTool was modified to prevent selection of CPlusLoad and NeedsFPU simultaneously because something downstream couldn't tolerate it. If you remove the restriction, do so with caution. I'll look through the MacApp 2.0 stuff later and see if I can find exactly what the problem may have been.
By the way, the upgrade to 2.0.1 is worth something if only for the new Mouser, which displays documentation as well as code. It beats the heck out of the Class and Method Reference Stack -- it's faster and more informative, and doesn't take up 750K like HyperCard does. It's now called MacBrowse, and the cute feline icon has been replaced with a generic icon. Stupid lawyers.
From: Btoback
Re: C++ loads and MacApp
1) TN#280 says that the problem is internal to CFront 3.1b3 and earlier. If you have a later version of CFront, go ahead and remove the safety check.
2) According to information in the MADA journal, the upgrade from MacApp 2.0 to 2.0.1 is free for purchasers of 2.0 Final. I haven't verified that myself, but call APDA.
From: Atom
Re: C++ loads and MacApp
Thanks for checking on this one; now I can rest a little easier. I'll check out the free update rumor with APDA on Monday. Somehow a free update from Apple sounds out of character, but who knows...
From: Walrus
Re: MacApp and LSP
Anyone out there use THINK Pascal and MacApp together? I've been working with it a little and I've found it a little tweaky (it crashes sometimes). I just found this behavior, and as soon as I can reliably reproduce this bug, I'll tell Symantec. Until then, I'm just wondering if perhaps this is more or less normal. I've been doing this on an fx, but I haven't tried to reproduce it on another CPU. What seems to happen is that after it crashes and you restart the same project, every time you do a Go, it crashes. If you then do a Remove Objects and rebuild, it works okay until next time. Hmmmmm.
And that reminds me, why doesn't Symantec have a Customer Service section on their board so THEY can answer this stuff?
From: Atom
Re: MacApp and LSP
I've had the same problem with THINK/MacApp (running on a II). It's a major nuisance, so much so that I stopped running inside the THINK environment long ago, building instead for the MacApp debugger. (Even unchecking the Smart Link box, this is still not much faster than MPW, and at least there you can switch away during long compiles.) Maybe it's worth calling THINK tech support on this one just to make sure they're aware of the problem. It could just be that project files are sometimes not completely updated before running inside THINK, or something of that sort. If you do call, I'd be interested to hear what they have to say about it. For the time being, I'm sticking with MPW.
From: Gremlin
Re: THINK Pascal Future...
I have a feature I'd want THINK Pascal to have. Couldn't a procedure be attached to the RESET command, so that you could easily do any kind of disposal actions?
The problem I have is signing off the MIDI Manager. If I don't do it before resetting the program, I get a crash the next time I open PatchBay (I suppose PatchBay tries to access my icon or something...).
So a little procedure that would be executed just before the actual reset action from THINK would be appreciated. I don't know if there's a way to avoid the problem. I would appreciate any suggestions from the Mac, MIDI, and Pascal gurus out there.
From: Photo
Re: Comm Toolbox info
I am writing a SCADA application and I have to fight with serial comm. I need to know where I can find the function prototypes for using the Comm Toolbox as a library from my C code.
From: Btoback
Re: MacApp TEditText object
Does anyone know how to predict exactly when a TEditText item will be refreshed? It is sometimes rewritten when I call SetText with Redraw set to TRUE, but never when I call SetSelection(0, 32767, TRUE). In fact, if I call the latter often enough, the text on the screen gets corrupted. I'm calling Focus() before calling either of the two routines.
From: Bdkyle
Re: floating modeless dialogs
I went and used Brendan Murphy's tool window manager from the November '90 MacTutor source disk. It is really neat, and I appreciate his contribution. He mentions that it is difficult to get a modeless dialog box to float. I must now agree.
Does anyone have any suggestions on how I can do it? I'd like to enter some data into an edit box, or use the mouse to move a graphical object around in a window underneath this floating edit box. Surely it's been done. Any help would be appreciated.
From: Earthman
Re: real-time animation
I am interested in any information that comes your way concerning real-time animation updates taking 3 frames. The way I understand it, the Apple 8-24 GC would accelerate QuickDraw calls, including AnimatePalette. I do not have an 8-24 GC as of yet, so I can't confirm.
From: Johnbaro
Re: real-time animation
I recently got some info directly from Apple DTS which should be useful to you. (By the way, kids, don't try this at home if you're not a certified developer - it took several months and many, many phone calls.)
According to DTS, SetEntries is called by AnimatePalette after it does all its palette things. SetEntries is the call responsible for actually changing the colors and, since it is not a QD call, an accelerator will not help. SetEntries does, however, wait for a vertical blanking period before doing anything, so sync tear should not be a problem. Also, according to DTS, AnimatePalette should take about 15 ms to execute, so they think I may have other things going on in my system (??). Actually, 30 ms is closer to what I'm getting now (that's 2 retrace periods).
One other thing they suggested, if speed is critical: call SetEntries directly - it's much faster. I've discovered, however, that dealing directly with CLUTs is not the same as dealing with palettes. With palettes I was drawing my image in the palette (it's a 1-dimensional image) and then swapping palettes. If you do the same thing with a CLUT, strange things happen when you change CLUT entries. To get the speed I needed, I had to come up with a different scheme for drawing my images - each color can appear only once in a given CLUT.
Other suggestions for increasing speed even more: instead of calling the trap, get the address first and then just jump to it (negligible improvement). Even lower level: call the driver itself (which is what SetEntries does). Refer to Designing Cards and Drivers for the Macintosh Family for info on the control call to SetEntries. See IM V (Color Manager) for info on SetEntries.
Hope this helps. Let me know if you need additional info.
From: Xander
Re: Mac Classic
I have a strange problem with a Mac Classic. The Classic is on a small AppleTalk net with a Plus and a LaserWriter Plus. The Classic will crash when attempting to print (rarely), or crash when a new floppy (800K) is inserted and a folder on the floppy is opened. All the operations connected with the floppy that cause it to crash have something to do with reading/using the Desktop file. If a floppy causes a crash, it will fairly consistently keep causing crashes until the Desktop file on it is rebuilt. This is actually the second Classic (the first was replaced) that has done this. Both were using 6.0.7. I installed 6.0.4 on the second one and it crashed on the next disk inserted. Any suggestions on what I should check next?
From: Atom
Re: Mac Classic
Have you checked the floppy causing the crashes for the WDEF virus?
This sucker masquerades as a WDEF ID=0 resource in the Desktop file, and caused symptoms on my Mac II very much like the ones you describe. If you haven't ruled out this possibility already, please let us know what you find.
From: Xander
Re: Mac Classic
Well, yes, it is the WDEF virus. The virus checkers we use don't appear to catch that one. It would seem that I need to get something a little newer. Thank you for the assist!
From: Atom
Re: Mac Classic
You might try GateKeeper Aid, a freeware INIT by Chris Johnson. Version 1.02 will detect and eradicate the WDEF virus when an infected floppy is mounted. I assume later versions will as well. I believe a recent version is available at this BBS.
From: Jlenski
Re: AppMaker
Thanks for your reply. I ordered a demo of AppMaker and have played with it for the last month. It should help me out a lot. I have been working with THINK Pascal and the THINK class library, trying to absorb object programming. I think I am getting the hang of it, but the whole class library is a lot to remember when building an application. I like being able to concentrate on what the program is supposed to do and not mess with all the interface details. I do some work on IBM PCs, and the thought of using one development environment for both machines is very appealing. Anyway, I am planning to buy AppMaker. It does seem worth what they are asking.