
Client-Server
Volume Number: 10
Issue Number: 3
Column Tag: From The Trenches

True Life Story

Developing a Client-Server system

By Malcolm H. Teas, Rye, New Hampshire

Note: Source code files accompanying article are located on MacTech CD-ROM or source code disks.

About the author

Malcolm Teas has been programming the Macintosh for five years in C with MPW and Think C. He’s active with object programming in TCL because he has a short attention span and likes to write applications quickly. His most recent shareware is Apple Pi, a pi-calculation program available on America Online. He lives, works, and consults from his house on the seacoast of New Hampshire. Seacoast Software, 556 Long John Road, Rye, NH 03870-2213, mhteas@aol.com or mhteas@well.sf.ca.us

In the spirit of all those TV shows with “real-life” themes, and temporarily short of real article ideas, I proposed an article on the true-to-life story of how I recently helped develop a client-server system. They probably accepted this article because I used the words “Case History” in my proposed title, which sounds like I’m knowledgeable. Of course, I might have just caught them on a good day.

In any case, the plot for this story takes two programmers (including myself) and pits them against the clock and programming bugs to implement a system that the customer needed for a legal deadline. Our customer decided that, while it’d be all right if I wrote about this, they didn’t want their name and business included, so they’ll remain “our customer” to preserve their competitive secret. Their business, of course, should never be associated with air conditioning equipment. You never heard it from me.

What the customer needed

Our customer must, by law, track the intelligent use of their product by their large customers. To do this, customer representatives visit those customers, survey them, then file papers on the surveys. For various reasons these surveys may be audited periodically, so, unlike the rest of us, they must actually be able to find the papers they’ve filed over the last several years.

Our customer had been doing this for some time, not completely successfully, with paper, filing cabinets, and lots of clerks. The Macintoshes were used to generate paper and keep the clerks busy. Although the Macs were connected in a large network, the network wasn’t being used for this purpose. Pages were printed on a laser printer, photocopied, then one copy was filed locally and the other was US-mailed to an archiving office.

The problems with this system were substantial. To start with, it was slow. Papers were often misfiled, were on someone’s desk for audit work and so unavailable to others, or got lost in the mail. The US Mail costs were mounting. It was difficult to analyze the reports in any order other than the original filing order. In short, the existing system was unwieldy.

Our Client-Server design

The management that brought us in wanted to use the Macintosh more fully. The customer was using MS-Mail and file servers throughout the company successfully and was interested in making better use of the Macs and their network with a “network-enabled application” or “netware” as they called it. They saw this project as a way to move toward that.

We came up with a client-server system that uses a 4D database as the server application, and a client application written in Think C. Documents are archived by dragging their icons onto the client application icon. The client app starts up, allows the user to specify the archiving criteria, sends the document to the server, and quits.
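
Under System 7, dragging documents onto an application icon reaches the application as the required ‘odoc’ (kAEOpenDocuments) AppleEvent, so the client just installs a handler for it at startup. The sketch below shows roughly what that plumbing looks like; it isn’t our shipping code, and RememberDocumentToArchive is a made-up name for whatever you do with the file specs.

#include <AppleEvents.h>
#include <Files.h>

/* Handler for the required 'odoc' event sent when documents are dragged
   onto the application icon.  Sketch only; error handling is minimal and
   RememberDocumentToArchive is a hypothetical routine. */
static pascal OSErr HandleOpenDocs(AppleEvent *event, AppleEvent *reply,
    long refCon)
{
    AEDescList  docList;
    long        count, i;
    AEKeyword   keyword;
    DescType    returnedType;
    FSSpec      spec;
    Size        actualSize;
    OSErr       err;

    err = AEGetParamDesc(event, keyDirectObject, typeAEList, &docList);
    if (err != noErr)
        return err;

    err = AECountItems(&docList, &count);
    for (i = 1; err == noErr && i <= count; i++) {
        err = AEGetNthPtr(&docList, i, typeFSS, &keyword, &returnedType,
            &spec, sizeof(spec), &actualSize);
        if (err == noErr)
            RememberDocumentToArchive(&spec);
    }
    AEDisposeDesc(&docList);
    return err;
}

/* Installed once at startup.  (With later Universal Headers you would
   wrap the handler in NewAEEventHandlerUPP first.) */
static void InstallAEHandlers(void)
{
    AEInstallEventHandler(kCoreEventClass, kAEOpenDocuments,
        HandleOpenDocs, 0L, false);
}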

Since far fewer people need to retrieve documents than need to archive them, we used simple file sharing on the server to allow people to retrieve documents. The client front-end application allows a wider range of less technically sophisticated people to archive documents correctly. A user doesn’t need file sharing privileges to archive a document. Later, in release two, we changed this because the number of users was expected to grow substantially.

The client application has one window with popups on the left side to specify the state, division, and year that the document should be filed under. On the right side of the window are two scrolling lists: one of customer names and numbers, and one of customer locations. The contents of the customer name list are determined by the criteria on the left side. The contents of the location list are determined by the left-side criteria and the selected customer.

There are fields that show the currently selected customer and location. Some checkboxes allow new customers’ names, numbers, and locations to be added. This addition takes place when the document is actually filed. The document is filed when the “Archive” button in the lower right of the window is clicked.

These are the essential elements of the client application. It has no menu bar. The application is basically a one-shot. The window (in the first version) is actually a dialog.

What’s done where

We found that the main design issue was how to divide the functionality between the client and server applications. We had to decide this first; the system design was too unwieldy otherwise. Once that was decided, we defined the messages between the two, and each piece became its own module.

Our server is a simple one that receives, stores, and retrieves data for the client. A client-server architecture is good for sharing common data with more than one user. The server handles the common part and the client creates the interface to it. We needed the server to store the documents and the criteria (customer information, state, division, etc.) used to index them.

User interaction and interface, on the other hand, are best handled on the client. After all, a program with one user is faster than a program with multiple users. This, ultimately, is the reason behind moving from mainframe-centered systems to client-server systems. Although speed isn’t always an issue in the user interface (after all, humans can take a long time - up to several hundred milliseconds - to recognize that something changed on the screen, much less understand it), we can use the processing time to format and display the data in useful ways for the user. The client application also tries to “condition” the data sent to the server so the server doesn’t have to handle as many error conditions. This is no excuse for the server programmer to skip error detection and recovery, though. We’re trying to save server execution time, not make the server less robust.
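
To give an idea of what “conditioning” means in practice, here’s a small, hypothetical example of the kind of check the client can do before a request ever goes on the wire - trimming blanks and rejecting a non-numeric customer number so the user fixes it locally instead of round-tripping to the server. The server still validates everything; this just reduces how often it has to say no.

#include <ctype.h>
#include <string.h>

/* Hypothetical client-side conditioning: trim blanks and make sure a
   customer number is all digits before it's sent.  Returns 1 if the
   field is safe to send, 0 if the user needs to fix it. */
static int ConditionCustomerNumber(char *text)
{
    char    *start = text;
    size_t  len;

    while (*start == ' ')                       /* strip leading blanks */
        start++;
    memmove(text, start, strlen(start) + 1);

    len = strlen(text);
    while (len > 0 && text[len - 1] == ' ')     /* strip trailing blanks */
        text[--len] = '\0';

    if (len == 0)
        return 0;                               /* nothing to send */

    for (start = text; *start != '\0'; start++)
        if (!isdigit((unsigned char) *start))
            return 0;                           /* let the user fix it */

    return 1;                                   /* safe to send */
}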

Different kinds of messages

Once the customer and the other filing criteria are specified, we need to send the document to the server, and the server must file it. This interchange of messages while the user is waiting must be handled quickly, but the document sending doesn’t need to be handled interactively. It could be processed by the server up to minutes later. Because of this difference between the two types of messages we needed to handle, we used two messaging methods.

Custom AppleEvents became the interactive message protocol and programmatic Microsoft Mail became the non-interactive message protocol. This was convenient since we could easily enclose documents in the MS-Mail message to send them to the server. In the second release, we were to extend these messages to do document and report retrieval from the server.

We built custom AppleEvents to: establish communication with the server, get the customer number list, get the list of customer locations for a customer number, and tell the server to add a new customer number or location. We used the MS-Mail message to send the document to the server. The server could, at its discretion, defer processing of this document if AppleEvent messages were coming in. We gave interactivity a higher priority.
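
Here’s roughly what one of the interactive requests looks like from the client’s side. The event class, event ID, and server signature shown are made-up placeholders, and for space I’ve addressed the server by application signature; the real system addressed a server application on another machine, which takes a PPC target ID rather than a signature.

#include <AppleEvents.h>

#define kArchiverClass   'Arcv'    /* hypothetical event class      */
#define kAEGetCustomers  'Gcus'    /* hypothetical event ID         */
#define kServerCreator   'ArcS'    /* hypothetical server signature */

/* Ask the server for the customer list matching the state/division/year
   criteria.  Sketch only; most error handling is trimmed for space. */
static OSErr AskForCustomerList(char *criteria, long criteriaLen,
    AppleEvent *reply)
{
    AEAddressDesc   target;
    AppleEvent      event;
    OSType          serverSig = kServerCreator;
    OSErr           err;

    err = AECreateDesc(typeApplSignature, (Ptr) &serverSig,
        sizeof(serverSig), &target);
    if (err != noErr)
        return err;

    err = AECreateAppleEvent(kArchiverClass, kAEGetCustomers, &target,
        kAutoGenerateReturnID, kAnyTransactionID, &event);
    if (err == noErr) {
        err = AEPutParamPtr(&event, keyDirectObject, typeChar,
            (Ptr) criteria, criteriaLen);
        if (err == noErr)
            err = AESend(&event, reply, kAEWaitReply | kAECanInteract,
                kAENormalPriority, kAEDefaultTimeout, 0L, 0L);
        AEDisposeDesc(&event);
    }
    AEDisposeDesc(&target);
    return err;
}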

Note that a true client-server system has no real knowledge of a “session”. A session is the sort of communications protocol that happens when you log onto AppleLink or America Online for example. You’re connected until you log off and, more importantly, whatever happens depends on what’s happened before. But a client-server system exchanges complete messages. Whatever the server does with a message is completely determined by the contents of that message from the client. There’s no explicit “state” memory of what the client’s done before as there is with a terminal session.

Only one of our messages came close to abusing the pure client-server architecture. The AppleEvent message to establish communication from the client to the server was used to trade version numbers between the two. After all, we wanted to plan for future versions with different messages; this allowed us to detect a client and server with different versions trying to talk to each other. Other messages assumed that this was already established.
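
The version trade itself is just another event parameter. Something like the check below is all the client needs; the keyword and the typeShortInteger representation are my illustration, not necessarily what we shipped.

#include <AppleEvents.h>

#define keyServerVersion  'Vers'   /* hypothetical reply parameter keyword */

/* Pull the server's version number out of the reply to the
   "establish communication" event and refuse to continue if it isn't
   one we understand.  The real check also told the user why. */
static Boolean ServerVersionOK(AppleEvent *reply, short ourVersion)
{
    short       serverVersion = 0;
    DescType    returnedType;
    Size        actualSize;
    OSErr       err;

    err = AEGetParamPtr(reply, keyServerVersion, typeShortInteger,
        &returnedType, &serverVersion, sizeof(serverVersion), &actualSize);
    return (err == noErr && serverVersion == ourVersion);
}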

Real Life - design’s great, but how do we do it?

We didn’t have much time to design and build this. We were given two months, and of that time we were to design, build, and have a user test of the system before it went into production. This meant that we needed to restrict the design to only the absolutely necessary elements and to build those as quickly as possible.

This constraint helped us decide to use 4D (version 2.2.3) for the server and Think C for the client. 4D uses a higher-level language. While it proved easier to build database code with, the user interface you can build with it isn’t as flexible and doesn’t follow the Apple Human Interface Guidelines very closely. In addition, like a lot of specialized higher-level languages, it’s good at what it’s designed for but of limited use in a more general application. Fortunately, the server application didn’t have or need much of a user interface. It needed to work with data and communicate. The former is what 4D was designed for; the latter we added with externals (extensions in code resources) written by third parties, which the server uses to get and send messages with the client application. (We were not using the 4D Server from ACI US, just standard 4D with externals.)

The client application needed to be written in a more flexible language than 4D. It turned out to be the more complex application of the two; user interface code often is, since it needs to deal with a wider range of possibilities. We chose Think C (version 5) for this since its fast compile/link/build cycle would help us meet our time goal.

Another factor in the logistics is the people. The person writing the server application is quite experienced in 4D and less familiar with C. As the author of the client, I’m quite the opposite. We picked development environments that played to our strengths. This was very important in such a short-cycle development project. Partly due to our experience, we both had code samples and snippets that we reused in our respective development environments. The reused code was already written and tested, so it also sped up the process.

One of the problems of client-server development is that to develop either part, it helps to have the other part already running. The messaging interface is a key component, and an unstable one can slow development. We solved this by having the server application development lag the client. I initially developed the client against a “virtual” server. The client’s messaging routines checked whether a global variable was set. If it was, the routines faked the expected response of the server. If not, they talked to the real server. As our messaging interface was largely defined beforehand, we knew what to expect.
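
The virtual server amounts to a global flag and a few canned replies. Something along these lines (the names and the fake data are illustrative, not the shipping code):

#include <Memory.h>

extern OSErr SendCustomerListEvent(char *criteria, long criteriaLen,
    Handle *listData);    /* the real AppleEvent round trip */

/* When gUseFakeServer is true the messaging layer never touches the
   network; it hands back canned data shaped like a real reply.  This let
   the client be written and tested before the 4D server existed. */
static Boolean gUseFakeServer = true;

static OSErr GetCustomerList(char *criteria, long criteriaLen,
    Handle *listData)
{
    if (gUseFakeServer) {
        /* Canned reply: a couple of plausible customer records. */
        static char fake[] = "0001\tAcme Cooling\r0002\tArctic Air\r";

        return PtrToHand((Ptr) fake, listData, sizeof(fake) - 1);
    }

    return SendCustomerListEvent(criteria, criteriaLen, listData);
}

Once the 4D side came up, the rest of the client never noticed the difference; only the flag changed.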

Later, as the server application was developed, we could test it against the already running client application. Although there were errors in both sides, the bulk of the client was already written and running. This left us free to concentrate on the messaging and server development.

Adding bells and whistles

Security was a feature that we, as developers, were interested in. The users weren’t concerned with it; in fact, we had to talk to them quite a while to convince them to use passwords. We didn’t want them to accidentally lose something and come back to us saying “why didn’t you think of that?”. One of the things we get paid for is to think of these things ahead of time. We also added keywords in the messages that the server checks for. If these keywords aren’t in a message, the server ignores the message. It’s a little harder to spoof the system this way.

Once we’d made the decision to use System 7’s file sharing as the method of retrieving documents, our major security features were already implemented. The AppleEvents use the same security as the file sharing. Since we require that the user use the same MS-Mail ID as the file sharing user name, and that the user already be logged on to MS-Mail, the security there is already taken care of too. While requiring the same user name in file sharing and the mail system may seem onerous, it’s not really, as our customer already had this requirement to simplify their system management.

One last required feature was an autosearch of the customer number list to do auto-completion. The client application has two lists on the right side of its window that show the customer and their location. Above each of them are fields that indicate the current customer and location selected from the list. If the user types in one of these fields, the client program searches the list for the closest match. If it finds only one match, it fills out the rest of what you would’ve typed. If it finds more than one match, it scrolls the list to display the first matched item (the list is kept in sorted order).
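
Since the list is kept sorted, the type-ahead search is just a prefix scan: find the first entry that begins with what’s been typed and see whether a second one does too. A minimal sketch follows (the real code worked with the list’s own data structures, not plain C strings):

#include <string.h>
#include <Types.h>

enum { kNoMatch = -1 };

/* Find the first entry whose text begins with what the user typed.
   Sets *unique to true when exactly one entry matches (so the field can
   be auto-completed); otherwise the caller just scrolls to the first
   match.  Assumes the entries are sorted, so matches are contiguous. */
static long FindPrefix(const char *typed, const char *entries[],
    long count, Boolean *unique)
{
    long    i, first = kNoMatch;
    size_t  len = strlen(typed);

    *unique = false;
    for (i = 0; i < count; i++) {
        if (strncmp(typed, entries[i], len) == 0) {
            if (first == kNoMatch) {
                first = i;
                *unique = true;
            } else {
                *unique = false;    /* a second match: don't complete */
                break;
            }
        } else if (first != kNoMatch) {
            break;                  /* past the block of matches */
        }
    }
    return first;                   /* kNoMatch if nothing matched */
}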

System capacity

Client-server systems can vary on several grounds: computation per request, size (in bytes) of each request, number of expected requests per unit of time, and number of clients serviced by the server. These, naturally, interact. If the server has a lot of work to do for each request (or the average request), then the number of requests it can process is lower. Luckily for us, the parameters of this system were quite nice: a low request rate, low computation overhead per request, and a comparatively small number of users. This let us run the server (for the first version) on an SE/30.

The second version is being rolled out across the whole USA. We’re anticipating a rather larger number of users. However, we’ve got information from the first release to allow us to better estimate the load. One parameter that’s important for us is the disk space used. We expect that to be quite high. The initial release helped us to estimate it better.

We decided that there are two ways of estimating these parameters: the peak method and the average method. Each is better for different parameters. For example, you wouldn’t use an average estimate for the disk space needed; you’d need the peak estimate there - and a generous one too. However, if the server couldn’t always respond as quickly as it should, that wouldn’t be terrible, so the average method could be used to estimate the needed CPU capacity of the server.
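
To make the two methods concrete, here’s the back-of-the-envelope arithmetic in code form. Every number below is hypothetical - the customer’s real figures aren’t mine to publish - but the shape of the calculation is the point: peak with a safety margin for disk, average for CPU.

#include <stdio.h>

/* Back-of-the-envelope capacity estimates.  All figures are made up;
   only the peak-versus-average structure matters. */
int main(void)
{
    /* Peak method: disk space.  Size for the busiest plausible year,
       the full retention period, and a safety margin on top. */
    long    docsPerYear   = 20000L;
    long    bytesPerDoc   = 30L * 1024L;
    long    yearsRetained = 5L;
    double  safety        = 1.5;
    double  diskBytes     = (double) docsPerYear * bytesPerDoc
                            * yearsRetained * safety;

    /* Average method: CPU.  A slow response now and then is tolerable,
       so size for the typical request rate, not the worst case. */
    double  requestsPerHour = 120.0;
    double  secondsPerReq   = 0.5;
    double  cpuLoad         = requestsPerHour * secondsPerReq / 3600.0;

    printf("disk needed: about %.0f MB\n", diskBytes / (1024.0 * 1024.0));
    printf("average CPU load: about %.1f%%\n", cpuLoad * 100.0);
    return 0;
}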

Putting version one into production

The system went together rather quickly. Less than two months after starting development, a couple of us drove to the customer’s pilot office to bring the client application to the first users and train them. We promptly ran into a culture clash.

There was no problem getting the users trained. However, as predicted, they didn’t want to use passwords. We’d designed the system so that most anyone could use it - including the representatives that visited the customers. We didn’t know, though, that the office culture was such that the representatives didn’t actually touch the keyboard. They dictated the reports; clerical staff took the taped dictation, created the documents, and filed them. Now, using the client application to file the documents, we had a few heavy users instead of a larger number of occasional users.

In any case, the customers loved the system. One month later we received a letter praising the system and “its on-time, under-budget development” that met all their needs.

The things that helped us make this a quick project were: its clear, focused project definition, our ruthless approach to feature creep, and our ability to reuse existing source code. Without the focused project definition, we would’ve gotten lost in message-definition problems and “what feature goes where” debates. When new features came up to be discussed, our approach was usually negative. Now, it isn’t fun to be a killjoy, but if your goal is to get the thing out the door, then you’ve got to have a ruthlessly pragmatic approach: will this feature add enough benefit to compensate for the time delay? Bear in mind that estimates of development time and benefits may not be accurate either; you have to factor in risk adjustments, too.

Reusing existing code is something that should be done more often. It’s like walking in seven-league boots. Imagine that you’re a carpenter. Suddenly you’re told that because you’d built one bookshelf, you’d never have to build another. You could sell that same one over and over. Why, you’d be overjoyed! But many developers neglect to scavenge a finished project for reusable pieces of source code. Perhaps that’s the difference between just a programmer and a real software engineer.

So, why a second version?

If the customer liked it so much, why do another version? Like many complex systems, it’s hard to know exactly what’s needed beforehand. Also, some features we had dropped earlier, when we’d gotten too ruthless on feature creep, needed to go back in. The customer wanted several things: to get lots of its U.S. offices using this, drag-and-drop of multiple documents, better reports, and, most significantly, to track the activities of the representatives.

Sealing the system

To use this system in all of their offices, we’d need automatic document retrieval. After all, permitting file sharing access to a few people is one thing; a larger group is quite another, especially with the need to archive the files automatically. We could improve security and simplify the system’s management with an automatic document retrieval feature. Document deletion would still be manual, however. Since this is an archival system, we didn’t want to make that part easy. We decided that, in large part, the system would be “sealed” against easy file-sharing access.

The hands-on users wanted some changes made to the interface for ease of use. We hadn’t anticipated the pattern of use which led them to want to drag-n-drop multiple files. They also wanted some bigger fields so they could see longer document names. These and other, similar features would make the system far more usable. While these were little features, they were essential to the day-to-day users.

If we’ve got graphics, use graphics!

The activity tracking was the least well-defined feature. After some work, we ended up with a flowchart of the activities that a representative goes through to inspect a customer. Some of this flowchart was defined by the legal guidelines, some by the company’s guidelines. After some faltering attempts at a user interface for this, we put the flowchart horizontally in a window that scrolled from left to right.

This seems to be working out well. It shows the information in the way that the representatives and other users think of it, and clarifies the relationships between the items in the flowchart. Each item has a box with a checkbox as its title, a date field for the deadline date, and another for the actual date for finishing the item. When the item is done, the checkbox is checked. The deadline date is calculated automatically, and the actual date is filled in by the user. When the deadline date is getting near, the box’s edge changes to red and becomes bold. This gives the user time to do something before the deadline arrives.
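
The “is the deadline getting near?” test is simple date arithmetic on the Mac’s seconds-since-1904 clock; the drawing code just asks it before framing the box. The threshold below is a guess for illustration, not the customer’s real one.

#include <OSUtils.h>

#define kWarningDays    14L                 /* hypothetical threshold */
#define kSecondsPerDay  (24L * 60L * 60L)

/* Should this flowchart item's box get the red, bold warning edge?
   Dates are classic Mac seconds since 1904. */
static Boolean DeadlineNeedsWarning(unsigned long deadlineSecs,
    Boolean itemDone)
{
    unsigned long   now;

    if (itemDone)
        return false;                       /* finished items stay plain */

    GetDateTime(&now);
    if (now >= deadlineSecs)
        return true;                        /* already overdue */
    return (deadlineSecs - now) <= kWarningDays * kSecondsPerDay;
}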

The users also wanted to generate reports on all of their customers at once. The reports, document retrieval, and activity information all seemed to fit together and didn’t seem to fit in the existing client application, so we designed a new client for the same server that handled these new functions.

Going to Objects

Technically more important, we moved from C to the subset of C++ used in the Think Class Library (TCL). This meant that the original client (the Archiver) needed rewriting too, because of the changes it required. Its structure was complex enough that additions were quite difficult, and there were a lot of internal dependencies. Rewriting it in TCL would permit us to redesign these out. This would also allow us to reuse much of the same code in the new client (called the Monitor). We chose TCL because it was part of the development environment we were already using. While MacApp has good points - better control over segmentation, for example - the overhead of MPW was too great.

In mapping out the hierarchy of objects for the Archiver and Monitor, I used a simplified form of the Booch method of design. I could simplify the method since I was the only one doing client application development. The view hierarchy was fairly obvious in the design, while the internal data hierarchy was less so. Often, in a client application, there’s a mirroring between the view objects - those that make up the visible elements of the interface - and the internal objects that maintain the data for the application. Another method is to build the views in the way that suits the interface, and the internals in the way that best suits the data.

Unfortunately, this wasn’t as clear to me then as it is now. After close to ten years of building programs in C, I was able to design easily “on the fly”. Not that I was ever that lazy, but I did tend to keep my designs rather informal. This is less easy in OOP design. It’s more difficult to go back and reshape existing objects. After all, the point is to encapsulate the information they need. This information includes the design information for that object and class. If you have to go back, you’ve forgotten too much already.

I’m not saying that you can’t go back and modify, but your system’s architecture and structure is more important in OOP than in procedural programming. Extra time up front on OOP design isn’t wasted. I believe it’s essential. The lack of good OOP design tools is also a factor; better tools would make this process easier, but they aren’t a cure-all, either. The mindset for OOP design and programming is different than that for procedural programming.

If you were building a car and you didn’t have standardized parts, you could custom-craft the necessary parts as you went. As long as you know how to build each part as you come to it, and as long as you know overall what you want to build, custom-crafting parts isn’t a problem. This is analogous to procedural programming - it takes longer, but there’s no concern with standardized parts. However, to produce a number of similar cars, you’d want standard parts. To use them, you need a more detailed design so you can know what to use when. It’s a trade-off between design and greater flexibility. This isn’t to say that OOP is bad; quite the opposite. The greater flexibility with custom programming isn’t usually needed. I strongly prefer the OOP approach.

Part of the OOP design problem is figuring out just what an object is. Using the TCL helped in this respect. Objects were already defined. I could usually sub-class something to specialize its operation to what I needed. This reduced the issue of deciding what operations and data to encapsulate in an object. This issue of deciding what an object is can be quite important. After all, an object is the software representation of a design concept. Do it right and the design is written in code easily; do it wrong and the development is difficult and schedule-busting.

We rewrote the Archiver and developed the Monitor (a more complex application) in a little over three months. As I write this, we’re just past user test. We made some bug fixes and small changes, and are now ready to roll out across the nation. This development was significantly faster than the prior version, even though I felt I could’ve done the design better.

I rewrote the Archiver first. Some code I ported from the prior version, but most of it was new. Generally, the code I ported was algorithms that I made into methods. I could’ve also simply called regular C code from the methods. That approach would be good for a collection of interrelated C routines. The approach I used was better here because the original routines were largely concerned with user interface and not operations. The TCL takes care of the user interface features either by itself or through your sub-classing of existing objects.

We’d specifically designed the Monitor to be similar in user interface to the Archiver. This allowed me to reuse many of the Archiver’s objects in the Monitor, so that sped up development significantly.

What would I do again?

The step-wise approach to client-server development with the fake server layer in the client was clearly something I’d repeat. Also, 4D makes a good server for this kind of architecture. However, if the traffic to the server were significantly higher, we’d have to reconsider this.

I’d definitely repeat the OOP development. The lucky opportunity to do much the same thing in C and in TCL was useful in that it gave me a clear comparison. I prefer the TCL; that way I can concentrate on writing the interesting code, not the same stuff over and over.

 
