
Performance Tuning
Volume Number: 10
Issue Number: 7
Column Tag: Powering Up

Performance Tuning

Touch that memory, go directly to jail, or at least to a wait state

By Richard Clark, General Magic and Jordan Mattson, Apple Computer, Inc.

The new Power Macintosh systems are definite price/performance winners: they run most 68K software at Quadra speeds, and recompiled code runs two to four times faster than a top-of-the-line Quadra. Yet simple recompilation isn’t enough if you want the best possible performance. Getting the best speed out of a Power Macintosh program requires performance tuning: identifying and changing the sections of code which are holding back performance.

This month’s Powering Up looks at the common actions which rob Power Macintosh programs of their maximum performance. We’ll start with a discussion of how reading and writing memory affects the speed of PowerPC code, take a detour through the Power Macintosh performance analysis tool, look at some specific performance enhancement techniques, and come back to an older 68K trick which doesn’t work so well anymore. Armed with this information, you should be well on your way to tuning your own PowerPC code.

Remember this!

If you only remember one thing from this article, remember this: in almost every PowerPC program, memory accesses affect performance the most. Compare this to the 68K, where reducing the number and complexity of instructions is the goal, and you’ll see that you’ll have to use some different techniques for tuning PowerPC code.

Why should memory accesses be such a big deal on the PowerPC? First, the PowerPC is designed to be clocked at high speeds, but high speed memory is expensive and hard to get. You might have noticed that Power Macintosh systems use the same 80ns SIMMs as 33MHz 68040 and 80x86 systems. Since the PowerPC can easily issue requests for memory in less than 80ns, Power Macintosh systems use several techniques to connect the fast PowerPC chip to slower RAM.

One technique involves running the memory bus at a slower speed than the processor, and asking the processor to wait every time it needs a memory location. This is often called “inserting wait states” into each memory access. (The first Power Macintosh models run the bus at 1/2 the processor clock speed, so a 66MHz system has a 33MHz bus.) This technique isn’t unique to the Power Macintosh - most commercial systems insert at least a single wait state into every memory access, and some “clock doubled” microprocessors connect to the outside world at one speed but multiply the clock speed by two before feeding it to the microprocessor’s logic.

Since the system’s RAM can’t supply information as quickly as the processor wants it, every PowerPC chip contains a built-in “memory cache.” A cache is a block of fast memory placed between the processor and some slower device: a memory cache sits between the processor and regular RAM, a disk cache sits between the processor and the disk drive, and so on. If the processor has to read the same location multiple times, subsequent reads can come from the “cached” copy of the information, which leads to dramatic improvements in performance. (A standard PowerPC 601 chip contains a single 32KB cache on-chip; in practical terms, a read from the cache on one typical configuration takes about 1/4 the time of a read from external memory, although this depends on clock rates.)

The cache doesn’t help so much when writing values to memory. Imagine a scenario where the PowerPC changes location 1 from the value 0 to 1. If the write only went as far as the cache, the cached copy of location 1 would read “1” while the copy in RAM would contain “0”. This leads to a conflict: the processor sees one value at location 1, while external devices such as the disk controller see the old value! Systems designers can use several techniques to avoid such “stale data”, but the simplest and most practical method involves a “write-through cache”. (Technically speaking, Apple uses “copy-back” for most of RAM. The video buffer is the only major exception.) In this model, when the PowerPC writes to location 1, it changes both the cached copy and the location in RAM. Since the write involves external memory, the cache’s speed advantage goes away.

The PowerPC memory cache, like most memory caches, is currently organized into “lines” of 32 bytes each. (Future chips may differ; on the 601, a line is actually 64 bytes, divided into two 32-byte sectors.) The first time a program accesses a memory location, the chip actually loads 32 bytes (or 8 instructions) into the cache, so reading a byte at address 0x01 loads locations 0x00 through 0x1F. Organizing the cache this way not only simplifies the logic a great deal, but makes sense because both data and instruction accesses tend to occur in “clusters”.

Since the cache holds several contiguous locations, the PowerPC can use “burst mode” when reading memory. In this mode, the microprocessor supplies an address, then keeps asking for the “next” location. Since the memory chips only have to decode the first address, then simply move to the next memory cell, RAM can supply the series of contiguous locations much more quickly than if each location was read separately.
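
To see why this matters, consider two ways of summing the same large array (this illustration is ours, and assumes a table much bigger than the 32KB cache): the first walks consecutive addresses and reuses each 32-byte line, while the second strides 2KB between accesses and loads a fresh line almost every time.

#define kRows 512
#define kCols 512

static long gTable[kRows][kCols];   /* 1MB table - far larger than the cache */

long SumRowMajor(void)              /* consecutive addresses: each 32-byte    */
{                                   /* cache line is fetched once and reused  */
    long sum = 0;
    int r, c;
    for (r = 0; r < kRows; r++)
        for (c = 0; c < kCols; c++)
            sum += gTable[r][c];
    return sum;
}

long SumColumnMajor(void)           /* 2KB stride between accesses: nearly    */
{                                   /* every read pulls in a new cache line   */
    long sum = 0;
    int r, c;
    for (c = 0; c < kCols; c++)
        for (r = 0; r < kRows; r++)
            sum += gTable[r][c];
    return sum;
}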

The other major reason that memory accesses affect the PowerPC so dramatically is that the PowerPC (like other RISC chips) uses many simple instructions which were designed for efficient execution. The chip wants another instruction every clock cycle or two (giving the cache a real workout), and was built around the assumption that most work will be done inside the registers. If a program places multiple reads or writes to external memory in a row, the entire chip will have to wait (multiple reads have a different behavior on the 603 and 604 to diminish this effect).

Improving the performance of PowerPC code

Now that we know that memory accesses have a dramatic effect on PowerPC performance, we’ll look at some specific examples. Let’s assume that you have a 68K application which you’ve just ported to the PowerPC. (If you actually have such an application, give yourself a pat on the back!) Your new application is fast, but you expected it to be even faster. What can you do?

Look for inefficient code.

As in any application, the efficiency of your algorithms plays a large role in the performance of your application. Fortunately, the Macintosh Debugger for PowerPC (aka M.D. for PowerPC) supplied as a part of Apple’s Macintosh on RISC Software Development Kit includes a performance analyzer for PowerPC code.

The analyzer is an “adaptive sampling profiler.” It maintains a series of “bins” (also known as buckets), each one representing a successive chunk of the memory address space. The analyzer “samples” the program counter at regular intervals, finds the bin whose address range contains the sampled PC, and increments that bin’s counter (see Figure 1). Over time, this sampling reveals where the processor is spending most of its time, though the sampler can only say that the time was spent inside one or more bins. But what if a bin covers four or five routines? How do you find the culprit then?

Figure 1

That’s the problem with ordinary sampling profilers - they can lack precision. The Adaptive Sampling Profiler addresses this problem by “splitting” full bins into a series of smaller buckets as it runs. This allows the sampler to “adapt” to the code being tested by covering busy areas with very small bins while leaving the large bins for less active areas. (see Figure 2)

Figure 2
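
As a rough sketch of the bookkeeping involved (our own simplified code, with made-up names and thresholds - Apple’s debugger nub is certainly more elaborate), an adaptive sampling profiler might look something like this:

#define kSplitThreshold 1024
#define kMaxBins        4096

typedef struct Bin {
    unsigned long start;    /* first address covered by this bin     */
    unsigned long length;   /* size of the covered range, in bytes   */
    unsigned long hits;     /* number of PC samples that landed here */
} Bin;

static Bin  gBins[kMaxBins];
static long gBinCount;

/* Called from the periodic sampling interrupt with the captured PC. */
static void RecordSample(unsigned long pc)
{
    long i;
    for (i = 0; i < gBinCount; i++) {
        if (pc >= gBins[i].start && pc - gBins[i].start < gBins[i].length) {
            gBins[i].hits++;
            return;
        }
    }
}

/* Called outside interrupt time: split any "full" bin in two, so hot */
/* code ends up covered by progressively smaller address ranges.      */
static void SplitBusyBins(void)
{
    long i, oldCount = gBinCount;
    for (i = 0; i < oldCount; i++) {
        if (gBins[i].hits >= kSplitThreshold && gBins[i].length >= 64
                && gBinCount < kMaxBins) {
            Bin upper = gBins[i];
            upper.start  += gBins[i].length / 2;
            upper.length -= gBins[i].length / 2;
            upper.hits    = 0;
            gBins[i].length /= 2;
            gBins[i].hits    = 0;   /* simplification: history is discarded */
            gBins[gBinCount++] = upper;
        }
    }
}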

To start the profiler, you’ll need to start the Macintosh Debugger with a .SYM file and have the Debugger’s “nub” installed on your Power Macintosh. After you’ve stopped at a breakpoint (or held down the <Command> key when launching the PowerPC application), select “New Session” from the “Performance” menu to start a new profiling session. (See Figure 3) This command tells the debugger nub to allocate a block of memory for the “buckets” and set their values to 0, installs a recurrent interrupt handler which records the current value of the Program Counter when triggered, and gives you a “results” window back in the Macintosh Debugger. You can also “configure” the nub to tell it how often to sample the program counter; sampling more often improves accuracy, but at the cost of overall system performance.

Figure 3

You then need to tell the nub when to begin profiling. The Macintosh Debugger offers two options for this - the “Enable Utility” menu command in the “Performance” menu or special breakpoints which turn sampling on or off when hit. The menu command is a good choice when profiling a program initially, and the breakpoints are useful when you’re trying to test one or more isolated sections of code. (See Figure 4) We also recommend setting one or more “regular” breakpoints at the end of your program just so you have an easy way to get back into the debugger and collect the performance statistics.

Figure 4

After you’ve enabled the sampler and run your program for a while, you’ll want to look at the results. Selecting the “Gather Report” command from the “Performance” menu causes the Debugger Nub to dump the contents of its buckets back to the host machine. (This can take a while, so be patient!) After the bins have been collected, the debugger can match bin addresses with the addresses of routines in memory (taken from the .SYM file) to show which bins cover which routines. The top part of the performance analysis window shows some summary statistics (overall time spent in your application, the ROM, the PowerPC shared Libraries, and in Mixed Mode), while the bottom part contains a histogram showing the contents of each bin. If the bin spans part of your code, it should be labeled with the name of one of your routines (or a range of addresses if it spans multiple routines); if a bin spans part of the toolbox, it may be labeled with the name of a toolbox routine, assuming that you’ve loaded the “ROMInfo” file that tells the Macintosh Debugger the name and location of each ROM routine. (See Figure 5)

Figure 5

Limit Mixed Mode switches.

There’s one major problem with the Adaptive Sampling Profiler - most of the time wasted in a PowerPC application isn’t in the application or in the toolbox. Instead, Mixed Mode switches account for most of the “wasted” time in Power Macintosh code. Remember that switching between 68K and PowerPC code involves moving parameters between the stack, emulated 68K registers, and PowerPC registers. All of these memory accesses take time - an average of 500 PowerPC instructions’ worth per Mixed Mode round trip. As a result, if you only port part of your application, you should port frequently called routines and the code that calls these routines.

Be careful when calling emulated traps from PowerPC code.

Even if you ported your entire application, you might encounter Mixed Mode switches, since the less frequently used parts of the Macintosh system software still use emulated 68K code. (This means that the ROM contains both 68K routines and PowerPC routines. Of course, the PowerPC routines begin with routine descriptors so that 68K code can call them without incident.) Whenever some PowerPC code calls a toolbox routine, it actually calls some glue code in the “interface library” (InterfaceLib), which uses NGetTrapAddress to get a universal procedure pointer to the toolbox routine and then uses CallUniversalProc to complete the call.
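
Conceptually, the glue for an emulated trap looks something like the sketch below; the routine name and the hand-built ProcInfo constant are our own, and Apple’s actual library code is more specialized (as we’ll see at the end of this article):

#include <Traps.h>
#include <OSUtils.h>
#include <MixedMode.h>

/* Conceptual sketch only - not Apple's InterfaceLib source. */
enum {
    kSysBeepProcInfo = kPascalStackBased
        | STACK_ROUTINE_PARAMETER(1, SIZE_CODE(sizeof(short)))
};

static void CallEmulatedSysBeep(short duration)
{
    CallUniversalProc(NGetTrapAddress(_SysBeep, ToolTrap),
                      kSysBeepProcInfo, duration);
}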

As a result, calling an emulated trap (such as WaitNextEvent) from native code will trigger a Mixed Mode switch. Thus, if your code is calling the Event Manager (or a SpinCursor library which calls SetCursor, or some other low-usage trap) frequently inside a calculation loop, you should change your code. Instead of calling the Event Manager every n times through the loop, call it at a fixed time interval, as shown below:


/* 1 */
int  i;
long nextTime;

nextTime = TickCount() + 15;    // now + 15/60 second (1/4 second)
for (i = 0; i <= 10000; i++) {
    if (TickCount() >= nextTime) {
        GiveAwayTime();         // my routine to spin the cursor and
                                //   call WaitNextEvent
        nextTime += 15;         // wait another 1/4 second before
                                //   giving away time
    }
}

(Incidentally, you might think that the above call to TickCount is wasteful. Not at all! It’s inexpensive on a Power Mac.)

Be careful about patching traps.

Another side-effect of the way that traps are called is that a patch might insert extra Mixed Mode switches into a trap call. For example, assume that _Read is emulated. If you install a system extension which patches _Read, your machine’s performance could decrease! Think about the case where a 68K application calls an un-patched _Read: we have 68K code calling 68K code, so no mixed mode switch occurs. Now assume that you’ve installed a PowerPC patch onto _Read: the system has to switch to PowerPC to call your patch, then to 68K to call the original trap. That’s a lot of work for one small patch!

If you install a 68K-only patch, the problem doesn’t go away, since you might have a PowerPC application calling your patch (though you only get one Mixed Mode switch, from PowerPC to 68K.) The preferred solution is to never patch traps on the Power Macintosh. Still, if you must patch (and there are valid reasons to patch), you can create a “fat patch” which contains both 68K and PowerPC code, with a Routine Descriptor which points to both. (You install a pointer to the Routine Descriptor - a Universal Procedure Pointer - into the trap table using NSetTrapAddress.)
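
The install step looks roughly like the sketch below. For clarity we show only the native half (the patch routine and its ProcInfo constant are our own names); a true fat patch would place both a 68K routine record and a PowerPC routine record in the same descriptor.

#include <Traps.h>
#include <OSUtils.h>
#include <MixedMode.h>
#include <Quickdraw.h>

/* Sketch only: a native patch on _SetCursor. The ProcInfo describes    */
/* SetCursor's pascal calling sequence (one 4-byte pointer, no result). */
enum {
    kSetCursorProcInfo = kPascalStackBased
        | STACK_ROUTINE_PARAMETER(1, SIZE_CODE(sizeof(Cursor *)))
};

static UniversalProcPtr gOrigSetCursor;

static pascal void MySetCursorPatch(const Cursor *crsr)
{
    /* ...do whatever the patch needs to do... */
    CallUniversalProc(gOrigSetCursor, kSetCursorProcInfo, crsr);
}

static void InstallSetCursorPatch(void)
{
    gOrigSetCursor = NGetTrapAddress(_SetCursor, ToolTrap);
    NSetTrapAddress(NewRoutineDescriptor((ProcPtr) MySetCursorPatch,
                                         kSetCursorProcInfo,
                                         GetCurrentISA()),
                    _SetCursor, ToolTrap);
}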

Finally, there are two problems which even fat patches can’t solve: “split traps” and selector-based traps. “Split” traps are minor utility routines (such as AddPt) which would be overwhelmed by the overhead of the trap dispatching mechanisms and which never really need to be patched. These routines are implemented as 68K code in the ROM and as PowerPC code in the Interface Library, which allows 68K code to access the 68K copy efficiently, and the PowerPC code to use the function without going through interface glue. Since the ROM-based versions of these routines are never called from PowerPC code, a patch on one of the “split traps” will only apply to 68K code calling the trap.

Selector-based traps present a different problem. A selector-based trap (such as _Pack8 for Apple events) implements several different routines with a single trap. When a program calls the trap, the program passes a “selector code” which specifies the desired routine and the rest of the parameters. However, each routine implemented by the trap may have a different parameter list, which makes building the appropriate Routine Descriptor amazingly difficult. (It’s impossible in some cases!) In general, a selector-based trap has to be patched using 68K code which calls individual PowerPC routines (each one of which begins with its own Routine Descriptor.) Because of this, you are guaranteed a performance problem when patching any accelerated selector-based traps. So, again we’ll reiterate: the preferred solution is to never patch traps on the Power Macintosh.

The hardest news about working with traps is that you never truly know whether you are calling an “accelerated” (PowerPC) trap, an emulated trap, or a “split trap.” Inside Macintosh doesn’t say which traps are which. A clever programmer could figure out which traps are which on a given machine with a given version of the system, but, as in the rest of the system software, these details are subject to change over time, and the presence of split traps makes an otherwise easy task (telling an emulated trap from an accelerated one) extremely difficult; a “split trap” looks just like any other 68K trap in the ROM.

Let the hardware help you.

Many applications run into performance problems when writing to an I/O device such as the serial ports or disk. It’s not a matter of the Macintosh Device Manager being inefficient (it isn’t), but rather speed limitations inherent in disks, modems, printers, and other peripheral devices. You can work around these limitations by using asynchronous I/O, so that the program can do other work while reading or writing to a peripheral device, and by transferring information in larger blocks (for example, writing 2KB blocks instead of 256-byte blocks). Incidentally, these techniques also apply on many of Apple’s newer 68K machines which support Direct Memory Access for I/O operations. [Asynchronous programming techniques have long offered improved overall system performance for networking software, and those techniques now apply to file I/O as well. It’s a technique worth mastering - Ed stb]
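
As a sketch of the asynchronous approach (the routine and buffer names are ours, and a real program would usually supply a completion routine or check ioResult from its main event loop rather than spinning), a single large write might look like this:

#include <Files.h>

/* Sketch of one asynchronous 2KB write; refNum, offset, and the       */
/* GiveAwayTime routine from the earlier example are assumed to exist. */
static char gBuffer[2048];

static OSErr WriteBlockAsync(short refNum, long offset)
{
    static ParamBlockRec pb;        /* must persist until the call completes */
    OSErr err;

    pb.ioParam.ioCompletion = nil;  /* we poll ioResult instead of using a   */
                                    /*   completion routine                  */
    pb.ioParam.ioRefNum     = refNum;
    pb.ioParam.ioBuffer     = (Ptr) gBuffer;
    pb.ioParam.ioReqCount   = sizeof(gBuffer);
    pb.ioParam.ioPosMode    = fsFromStart;
    pb.ioParam.ioPosOffset  = offset;

    err = PBWriteAsync(&pb);
    if (err != noErr)
        return err;

    while (pb.ioParam.ioResult > 0) /* positive ioResult means "in progress" */
        GiveAwayTime();             /* do other useful work while the disk   */
                                    /*   driver completes the write          */

    return pb.ioParam.ioResult;
}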

Look at your parameter passing.

Application developers can control the number of times a program accesses memory by changing the ways that parameters get passed. In general, if one PowerPC routine is calling another, it’s better to pass multiple parameters into a function’s parameter list than it is to pass a parameter block, and it’s better to put floating-point parameters at the end of a parameter list than at the beginning.

The reason for this lies in the calling conventions (see last month’s Powering Up): 8 general-purpose registers are set aside for the first 8 words of parameters (regardless of type), and 13 floating-point registers are set aside for the first 13 floating-point parameters. Any parameters which can’t be passed in registers get written to the stack, and that slows your program down.

For example, the routine:


/* 2 */
void GoodRoutine (int a, int b, int c, int d,
 double d1, double d2, double d3, double d4)

receives parameters a through d in general-purpose registers 3 through 6, and the parameters d1 through d4 in floating-point registers 1 through 4. If this were a “leaf routine”, it could be called without any writes to memory!

However, a simple rearrangement of the parameters leads to a completely different result:

void BadRoutine (double d1, double d2, double d3, double d4,
 int a, int b, int c, int d)

In this example, we’ve placed the floating-point parameters before the four integers. Since each “double” is two words (8 bytes) long, these four doubles fill up the first 8 words of the parameter list. Remember that the General Purpose registers are mapped to the first 8 words of the parameter list regardless of type, so the following integers don’t wind up in registers, and have to be passed on the stack instead. This routine will definitely take more time to call than the first one!

Of course, there’s an exception to every rule. If you’re calling a function using Mixed Mode, passing a pointer to a parameter block could be better than passing several individual parameters, because that’s fewer parameters that Mixed Mode has to move around when making the call.
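
For example (the names here are purely illustrative), these two declarations carry the same information, but the second gives Mixed Mode only a single 4-byte pointer to move instead of eight separate values:

/* Illustrative only. */
void DrawChart(int a, int b, int c, int d,
               double d1, double d2, double d3, double d4);

typedef struct ChartParams {
    int    a, b, c, d;
    double d1, d2, d3, d4;
} ChartParams;

void DrawChartPB(const ChartParams *params);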

Check your data structure alignments.

As we mentioned way back in Powering Up #1 (January, 1994), the PowerPC can access “aligned data” more quickly than “misaligned data.” The PowerPC considers a 2-byte value to be “aligned” if it begins on an even address, a 4-byte value is aligned if its address is a multiple of 4, and an 8-byte value (a floating-point double) should begin on an address which is a multiple of 8. Many RISC chips require that all data be aligned in this way, but the PowerPC is more lenient - it can read and write “misaligned” 2, 4, and 8-byte values, but such operations require multiple accesses to memory and can take two to four times as long as an aligned access.

The PowerPC compilers enforce data structure alignments automatically by inserting “padding” into data structures to bring each element into alignment. For example, in the structure:


/* 3 */
struct sample {
 short a; // 2 bytes
 long b;  // 4 bytes
};

In this structure, a PowerPC C compiler would normally insert a 2-byte “filler” between “a” and “b” so that “b” begins at offset 4, making the structure 8 bytes long.

“If this is so automatic, where’s the problem?” The problem occurs when you have structures that are shared between PowerPC and 68K code. If you passed the above structure to a 68K C compiler, it would be 6 bytes long instead of the 8 bytes allocated for PowerPC. 68K compilers follow a different set of data structure alignment rules than PowerPC compilers do. So, you have to tell the PowerPC compiler to use 68K alignment rules for such “shared” data structures.
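
With Apple’s PowerPC compilers, the usual way to do this is to wrap just the shared declarations in an alignment pragma (the exact spelling can vary from compiler to compiler):

/* Only the structures shared with 68K code get 68K alignment. */
#pragma options align=mac68k
struct sample {
 short a; // 2 bytes, offset 0
 long b;  // 4 bytes, offset 2 -- no filler under 68K rules
};
#pragma options align=reset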

Now, it’s fine if you only have to change the alignment of a few data structures to accommodate the toolbox or other 68K code. However, many developers just give the “use 68K alignment” flag to the PowerPC compiler when building their applications and ignore the whole issue. This is a bad idea - we’ve seen sample applications and benchmarks speed up by a factor of two when PowerPC structure alignments were used instead of 68K alignments. This is only going to become more critical in future PowerPC chips.

In the future, you can solve this problem by grouping structure fields by size, placing the largest fields first. You could make the above example operate identically on PowerPC and 68K (without any performance hit) with a little rearrangement:


/* 4 */
struct sample {
 long b;  // 4 bytes
 short a; // 2 bytes -- no filler before!
};

Should I take the bypass?

There’s one 68K-specific trick that you should forget about completely when moving to the PowerPC - bypassing the trap dispatcher. Developers who call one or more toolbox routines in a tight loop often want to eliminate the overhead of the trap dispatcher, so they use NGetTrapAddress to get a pointer to a trap and call it directly. This is actually slower on the Power Macintosh than going through the Interface Library, for two reasons: split traps, and the way the Interface Library works.

If you called NGetTrapAddress to get the address of a split trap, you’ll get the address of the 68K version in ROM. Calling this trap requires a full Mixed Mode switch, which will be much slower than calling the native version in InterfaceLib directly.

Even if a trap isn’t split, the InterfaceLib may call it more quickly than you can. Apple’s engineers were able to use some very efficient trap calling code in the InterfaceLib - code which is specialized for particular sets of parameters, and so is more efficient than the general Mixed Mode calling mechanism.

Next month in “Powering Up”

Up to now, we have been looking at how to bring an existing Macintosh application to the Power Macintosh and maximize its performance. Next month we will look at how you can use the performance of the Power Macintosh to enhance the user interface of an existing Macintosh application or create an enhanced user interface for a Power Macintosh application.

 
