Performance By Design: a blog devoted to Windows performance, application responsiveness and scalability


Monday, 17 December 2012

Inside the Windows 8 Runtime, Part 1

Posted on 11:17 by Unknown

This is the next installment in a series of blog posts on the recent Windows 8 release that I began a few months back. In the last entry I expressed some reservations about the architectural decisions associated with the new Windows Runtime API layer. In this post and the ones that follow, I will provide more detail about my concerns as we look inside the new Windows Runtime layer. But first, we will need some background on the native C-language Win32 API, COM, and the Common Language Runtime (CLR) used in the .NET Framework. Collectively, these three facilities represent the run-time services available to Windows applications prior to Windows 8. As I mentioned in the earlier posts, the new Windows Runtime layer in Windows 8 is a port of a subset of the existing Win32 run-time to the ARM hardware platform.

Windows Run-time Libraries

Run-time libraries in Windows provide an API layer that applications running in User mode can call to request services from the OS. For historical reasons, this runtime layer is known as Win32, even when a Win32 service is called on a 64-bit OS. A good example of a Win32 runtime service is any operation that involves opening and accessing a file stored somewhere in the file system (or the network, or the cloud). Application programs require a variety of OS services in order to access a file, including authentication and serialization, for example.
Today, the Win32 API layer spans hundreds of DLLs and contains hundreds of thousands of methods that Windows applications can call. One noteworthy aspect of the Win32 runtime libraries is how long they need to persist, due to the large number of Windows applications that depend on them. Software endures. The extremely broad Win32 surface area creates a continuing obligation to support those interfaces, or else risk shipping OS upgrades that are not upwardly compatible with earlier versions of the OS.

Historically in Windows, language-specific run-time libraries were provided to allow applications to interact with the OS to perform basic functions like accessing the keyboard, the display, the mouse, and the file system. By design, the standardized “C” language was kept compact to make it more portable, but was intended to be augmented by C runtime libraries for basic functions like string-handling. To support development in C++, for example, Microsoft provided the Microsoft Foundation Class library, or MFC, which included a set of object-oriented C++ wrappers around Win32 APIs, originally developed using the C language. When you consider that Win32 APIs provide common services associated with Windows user interface GUI elements like windows, menus, buttons and controls, dialog boxes, etc., you can imagine that the scope of MFC was and is quite broad.
MFC also incorporated many classes that were not that closely associated with specific OS services, but certainly made it easier to develop applications for Windows. Good examples of these ancillary classes include MFC classes for string handling, date handling, and data structures such as Arrays and Lists. Having access to generic C++ objects that reliably handle dates and other calendar functions or implement useful data structures, such as lists of (variable length) strings, greatly simplifies application development for Windows.

COM Objects

The set of technologies associated with the Microsoft Component Object Model (COM) brought an object-oriented way to build Windows applications. COM originally arose out of a need to support inter-process communication between Windows applications of the type where, for example, drag-and-drop is used to pull a file object from the Windows Explorer app and plop it into the active window of a desktop application. The most intriguing aspect of COM is that the programming model is designed explicitly to support late binding to objects during run time, something that is essential for a feature like drag-and-drop to work across unrelated processes. Late binding to COM objects allowed for the construction of components whose behavior was discoverable during run-time, something which was very innovative.
At the heart of COM is the IUnknown interface, which all COM classes must support. The IUnknown interface has three methods: QueryInterface, AddRef, and Release. The AddRef and Release methods are used to manage the object’s lifetime. QueryInterface is used by the calling program to discover whether the COM object it is communicating with supports a contract that the caller understands. (See, for example, this documentation for a discussion of the QueryInterface mechanism for late binding: http://msdn.microsoft.com/en-us/library/ms687230(v=vs.85).aspx.)
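The reference-counting discipline behind AddRef and Release can be sketched in a few lines of portable C++. This is a hypothetical, Windows-free model of the idea, not the real IUnknown from <unknwn.h>: each outstanding reference bumps the count, and the object is destroyed when the count reaches zero.

```cpp
#include <cassert>
#include <cstdint>

// Hedged sketch of COM-style lifetime management. Real COM objects
// implement IUnknown and call "delete this" when the count hits zero;
// here we just record destruction so the example compiles anywhere.
struct RefCounted {
    std::uint32_t refs = 1;   // creating the object implies one reference
    bool destroyed = false;

    std::uint32_t AddRef() { return ++refs; }

    std::uint32_t Release() {
        std::uint32_t remaining = --refs;
        if (remaining == 0) {
            destroyed = true;  // real code would free the object here
        }
        return remaining;
    }
};
```

Every caller that shares the object calls AddRef, and every caller that is finished with it calls Release; the last Release destroys the object. As the next section discusses, getting this pairing right in every code path is exactly where applications tend to go wrong.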

Because COM objects are discoverable at run-time, COM development also enabled third-party developers to add component libraries of their own design. Component libraries are packaged as DLLs and installed onto the Windows machine. Originally, COM components also required an entry in the Windows registry to store a CLSID GUID, a globally unique identifier that is used in calls to instantiate the COM component at runtime. If you use the Registry Editor to look for CLSIDs stored under the HKEY_CLASSES_ROOT key, you will typically see thousands of COM objects that are installed on your system, as illustrated in Figure 1. (Typically, you will also see several versions of the same, or similar, COM objects installed; registering versions side by side like this is the only good way to deal with versioning in COM.)

Figure 1. The CLSIDs of installed COM objects that are available at runtime are registered under the HKEY_CLASSES_ROOT key in the Windows Registry.

Note that beginning in Windows XP, the requirement for COM objects to use the registry was relaxed somewhat; the CLSID and other activation metadata can now also be stored in an XML-based assembly manifest instead, but only in certain cases.
After years of object-oriented programming (OOP) proponents advocating the use of standardized components in application development, COM technology proved beyond a doubt the efficacy of this approach. COM changed the face of software development on Windows, leading to the development of a wide range of third party component libraries that extended the MFC classes and opening up Windows software development significantly. COM components packaged as ActiveX controls could easily be added to any application development project. For example, it became commonplace for Windows developers to use third party ActiveX controls to give their application a similar look-and-feel to the latest Microsoft version of Office. In my case, instead of trying to develop a charting tool for a performance data visualization app from the ground up, I licensed a very professional Chart control from a third party component library developer and plugged that into my application.

Limitations of the COM programming model

As innovative as the COM programming model was, and as useful as the technology proved to be in extending the Windows development platform, aspects of the late-binding approach used in COM came to be seen as having some decidedly less than desirable qualities. I will mention just three issues here:
  1. complex performance considerations,
  2. memory leaks, and
  3. the reality of dynamically linking to a complex object-oriented interface.
Developing well-behaved COM objects turns out to be quite difficult, forcing developers to deal with potentially complex performance considerations such as threading, concurrency, and serialization alternatives. COM objects can live either in-process or out-of-process in separate COM Server address spaces. (In COM+, COM objects can even be distributed across the network.) The runtime infrastructure to support late binding to all types of COM objects at runtime is quite complex, but at this point, of course, it is deeply embedded into the Windows OS.

Software developers also discovered that applications built using persistent COM objects were prone to memory leaks in devious, difficult-to-diagnose, non-obvious ways. Since COM objects can be shared across multiple processes, the COM object itself has responsibility for managing its own lifetime using reference counting, implemented using the AddRef and Release methods of the IUnknown interface. Keeping track of which ActiveX objects are active inside your program and making sure inactive ones are released in a timely fashion can be complicated enough. Nesting one ActiveX control inside another ActiveX control, for instance, can create a circular chain of references that defeats reference counting. When that happens, the objects in the chain can never be destroyed, and they accumulate inside the process address space until virtual memory is exhausted.
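The circular-reference failure mode is easy to reproduce with std::shared_ptr, which uses the same reference-counting idea as AddRef/Release. This is an illustrative analogy with invented names (Control, leak_after_release), not ActiveX code: once two objects hold counted references to each other, dropping all external references never drives either count to zero.

```cpp
#include <cassert>
#include <memory>

// Stand-in for a control that can hold a counted reference to another
// control, e.g. one ActiveX control nested inside another.
struct Control {
    std::shared_ptr<Control> child;
    static int live;          // instances not yet destroyed
    Control()  { ++live; }
    ~Control() { --live; }
};
int Control::live = 0;

// Returns how many objects survive after both externally held references
// are dropped: 0 without a cycle, 2 with one (the leak).
int leak_after_release(bool make_cycle) {
    {
        auto a = std::make_shared<Control>();
        auto b = std::make_shared<Control>();
        a->child = b;
        if (make_cycle) b->child = a;  // circular chain of references
    }   // a and b go out of scope here; counts drop, but in the cycle
        // each object still holds a reference to the other
    int survivors = Control::live;
    Control::live = 0;  // reset the counter; the leaked memory remains
    return survivors;
}
```

In C++, std::weak_ptr exists precisely to break such cycles; classic COM had no equivalent, which is why nested controls had to be torn down with careful manual housekeeping.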
Finally, it turned out that the idea that all the properties and methods of an object embedded in your application should be discoverable at runtime, which sounded good on paper, wasn’t that useful a construct for dealing with a complex interface. As a practical matter, code written against a complicated COM interface that you bind to dynamically creates just as many dependencies as any statically linked object. Another very pointed objection to late binding is that it can lead to a variety of logic errors that are only discoverable during runtime testing, whereas many of these same type-mismatch (or invalid cast) errors could readily be detected at compile time using static binding to strongly typed objects.
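The compile-time versus run-time trade-off can be shown with dynamic_cast, which plays a role loosely analogous to QueryInterface: it asks at run time whether an object supports a given interface. The Shape/Circle types here are invented for illustration.

```cpp
#include <cassert>

// Hedged sketch of the static- vs late-binding trade-off. With late
// binding (dynamic_cast here, QueryInterface in COM), a bad cast is only
// discovered when the code runs; with static typing the compiler rejects
// the mismatch outright.
struct Shape  { virtual ~Shape() = default; };
struct Circle : Shape { double radius = 1.0; };
struct Square : Shape { double side = 1.0; };

// Late-bound query: returns nullptr at runtime if the object turns out
// not to be a Circle; the caller must check, or misbehave.
Circle* as_circle(Shape* s) { return dynamic_cast<Circle*>(s); }
```

By contrast, the statically typed mistake never gets that far: `Circle* c = &some_square;` simply does not compile, which is exactly the class of error the .NET designers wanted moved from testing time to compile time.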

Building components in the .NET Framework


This criticism of the COM programming model was taken to heart by the architects of the .NET Framework. The Framework languages – mainly C# – were designed from the ground up using OOP principles like inheritance and polymorphism, and adopted the best practices associated with software engineering quality initiatives like the Ada programming language. Unlike the C++ language, which was grafted on top of the C programming language, the Framework languages dispense with the use of address pointers entirely. Pointers to object instances are maintained internally, of course, but cannot be referenced directly by user code, except for the express purpose of interoperability with COM objects and Win32 API calls.
Object instance reference counting is internalized as well in the Framework languages, which enables the .NET Common Language Runtime (CLR) to periodically reclaim unreferenced objects automatically using garbage collection. Automatic garbage collection is one of the key features of the .NET managed code environment. Memory leaks of the type associated with the housekeeping logic errors that tend to plague programs written using COM objects are eliminated in one fell swoop. (To be sure, memory leaks of other types can still occur, and dealing with an automatic garbage collection procedure can sometimes be tricky. See, for example, the MSDN article “Investigating Memory Issues” from CLR developer Maoni Stephens, or some of the online documentation that I wrote here that discusses typical memory management problems that .NET developers can encounter, with suggestions on how to deal with them effectively.)

 “Strongly-typed” objects


Furthermore, the architects of the .NET Framework adopted an approach diametrically opposed to COM for building component libraries. The .NET Framework relies on static binding to strongly typed .NET components. “Strongly typed” in this context means that the code references an explicit class, i.e., one ultimately derived from System.Object, which permits the compiler to detect type mismatches. C# is also very strict about implicit type conversions: narrowing conversions are not permitted. Your code must use an explicit cast or one of the Convert class library methods, which can convert from any base type to any other base type.
To be sure, the .NET Framework does support dynamic binding to objects during runtime under specific circumstances. These include using Reflection and the is and as keywords. It is also often necessary for .NET applications to communicate with unmanaged code, which includes programs written in languages such as JavaScript that rely on dynamic binding. (The 4.0 version of the Framework added the dynamic keyword, which instructs the compiler to bypass type-checking to help with JavaScript interop issues.)
Meanwhile, Windows itself and most Office apps still rely heavily on COM. The primary way the .NET Framework deals with interoperability with Win32 APIs and COM objects that pass pointers around is through wrappers that expose .NET classes around those APIs and objects. For example, instead of calling the QueryPerformanceCounter Win32 API to gather high-precision clock values in Windows, C# developers instantiate a Stopwatch object and call its methods. Structs are still permitted in C#, and they are unavoidable when you are dealing with Win32 APIs that don’t already have wrapper classes, using a .NET feature known as platform invoke, or PInvoke for short. If you need to call a Win32 API that is adorned with address pointers, you can often get help from the pinvoke.net wiki; but, more often than not, as in the case of the Stopwatch class, there is likely to be a .NET wrapper already available.
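The wrapper idea is worth a small sketch. This is a portable C++ analogue of the .NET Stopwatch pattern, built on std::chrono rather than the real QueryPerformanceCounter: the raw high-resolution clock is hidden behind a small class with Start/Stop/Elapsed methods. The method names mirror the C# class, but this is an illustrative sketch, not the actual implementation.

```cpp
#include <cassert>
#include <chrono>

// Hedged analogue of .NET's System.Diagnostics.Stopwatch: callers never
// touch the raw clock API, only this wrapper.
class Stopwatch {
    using clock = std::chrono::steady_clock;
    clock::time_point begin{};
    clock::duration accumulated{};  // time from completed Start/Stop pairs
    bool running = false;
public:
    void Start() {
        if (!running) { begin = clock::now(); running = true; }
    }
    void Stop() {
        if (running) { accumulated += clock::now() - begin; running = false; }
    }
    long long ElapsedMilliseconds() const {
        auto total = accumulated
                   + (running ? clock::now() - begin : clock::duration{});
        return std::chrono::duration_cast<std::chrono::milliseconds>(total)
                   .count();
    }
};
```

The design point is the same one the post makes: the wrapper hides a pointer-laden (or platform-specific) API behind an ordinary object, so application code never deals with the raw interface at all.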
Memory management in a C# program that does a fair amount of interop with native code and COM objects is also complicated, since the CLR garbage collector cannot automatically free memory associated with abandoned COM objects the way it can for object instances built using managed code. .NET applications that need to interact frequently with COM objects are prone to leak memory in subtle, non-obvious ways. For instance, the CLR garbage collector cannot reclaim an instance of a .NET class that references a persistent COM object so long as the COM object itself remains referenced.
In summary, the .NET Framework addresses the key limitations associated with COM development, adopting a programming model that relies on strongly-typed objects. This approach was diametrically opposed to the one promulgated in COM, one based on binding to objects dynamically during run-time. Application programs written in one of the .NET Framework languages like C# or VB.NET, of course, still need to call Win32 services that pass pointers around and use COM objects that need reference counting. .NET classes that wrap frequently accessed COM objects or Win32 methods are very effective in hiding the fact that, under the covers, the Windows run-time still relies heavily on COM.




Wednesday, 21 November 2012

Plug-and-Play devices on Windows Tablets

Posted on 13:46 by Unknown

In the last post on Windows 8 and the new Windows Runtime libraries for Windows Store apps, I mentioned that the key deliverable in the new version of the Windows OS is the port to the ARM platform. In this post, I will discuss the implications of Windows running on ARM, emphasizing the impact of “plug-and-play” device driver technology. In porting the core of the OS to the ARM platform, Microsoft was careful to preserve the interfaces used by device driver developers, ensuring that there was a smooth transition. Microsoft wanted to allow customers to be able to attach most of the peripherals they use today on a Windows 7 machine to any ARM-based tablet running Windows 8.

What is ARM?

In discussing the Windows 8 port to the ARM platform with some folks, I noticed that not everyone is familiar with the underlying hardware: that it runs a different instruction set than Intel-based computers, that it is not Intel-compatible, and so on. So, let’s start with a little bit about the ARM hardware itself.

ARM – the acronym originally stood for Advanced RISC Machine – is a processor architecture specification that is designed by the ARM consortium and licensed to its members, who then build processors that implement it. Members of the consortium work together to devise the ARM standard and move it forward. By any measure, ARM’s reach in the marketplace today is impressive. According to the ARM web site, at least 95% of all mobile phones – not just smartphones – are powered by ARM microprocessors. In 2010, six billion microprocessors based on ARM designs were built. If you own a recent-model coffee maker that sports a programmable, electronic interface, you are probably talking to an ARM microprocessor.

So, ARM refers to the processor architecture, an “open” standard of sorts, open, at least, to any hardware manufacturing company willing to pay to license the ARM IP and designs from the consortium – which runs you several million dollars, plus royalties on every unit you build. The ARM processor specification, which is based on RISC principles, is distinct from the manufacturing of ARM chips. Overall, there are currently about 20 manufacturers that build ARM-based computers, with companies like Qualcomm and NVIDIA leading the charge.

Another term associated with devices like the NVIDIA Tegra that powers the Surface is System on a Chip (SoC). In the case of the NVIDIA chip, that entails embedding the ARM microprocessor on a single silicon wafer that contains pretty much everything a mobile computer might need – a graphics processor (NVIDIA’s specialty), audio, video, imaging, etc. Or, if you prefer an integrated SoC design optimized for telephony, you might decide to go with the Qualcomm version. The key is that the software you build for the phone can also run on an ARM tablet because the underlying processor instruction set is compatible.

I blogged last year that ARM technology and the consortium of manufacturers that have adopted ARM designs have emerged as the first credible challenge to the Wintel hegemony that has dominated mainstream computing for the last twenty years. A year later, that prediction looks better and better. From almost every perspective today, ARM looks like it is winning.

ARM’s recent success is reflected in the relative financial results of both Microsoft and Intel, compared to Apple and Qualcomm, for example. Microsoft recently reported revenues slipped by 8% in its latest quarter, while Intel sales were down about 5%. The forecast for PC sales is down, as I mentioned in an earlier post, as more people are opting to buy tablets instead. Meanwhile, Apple posted “disappointing” financial results for the quarter because sales of iPads “only” increased by 26%. Overall, revenue at Apple increased by 27% in its most recent quarterly earnings report. Sales of iPhones were up 58%, compared to last year, with Apple apparently having some difficulty keeping up with the demand.  

All of which makes Windows 8 a very important release for Microsoft. Windows 8 needs to offer a credible alternative to Apple and Android phones and tablets, blunting their drive to dominate this market. It is an open question whether Windows 8 is good enough to do that. My guess is “yes” for tablets, but “no” for phones. Windows OEMs like Lenovo, HP and Dell are rushing to bring machines that exploit the Windows 8 touch screen interface to market. Microsoft is hoping that Windows’ long-term policy of being open to all sorts of hardware peripherals – devices that “plug and play” when attached to a Windows PC – will provide a major advantage in the emerging market for tablets.

Plug and Play devices

As I discussed in the last blog entry, you can buy an ARM-based tablet like the new Microsoft Surface, but it is only capable of running applications built on top of Windows RT. Picture the architecture of Windows 8, for example, which looks like the block diagram in Figure 1:
 
 

Figure 1. The Windows Runtime (aka Windows RT) is a new API layer on top of existing Win32 OS interfaces that developers must target in order to build a Windows Store app that can run on Windows 8 ARM-based tablets, which are limited to supporting Windows RT. As illustrated, a Windows Store app can also call into a limited subset of existing Win32 interfaces that have not been fully converted in Windows 8.
 

The set of OS changes associated with Windows 8 is highlighted in the upper right corner of the block diagram in Figure 1: the new Windows Runtime API layer, which spans a significant subset of the existing Win32 API that Windows applications call into to use OS functions. Examples of Win32 APIs that Windows applications ordinarily need to call include those for accessing the keyboard, mouse, display, and touch screen, and for operating the audio components of the machine. Windows 8 Store apps that can run on ARM processors must limit themselves to calling into the Windows Runtime APIs, except for a small number of selected Win32 APIs, like the COM APIs, that are permitted.
 
Figure 1 is modeled on the diagrams used in chapter 2 of Mark Russinovich’s most recent Windows Internals book, which I have updated to reflect the new Windows Runtime layer. (Windows Internals is essential reading for anyone interested in developing a device driver for Windows, or who just wants to understand how this stuff works.) It is a conventional view of how the Windows OS is structured. It shows the core components of the OS, generally associated with the Windows Executive, the OS kernel, and the HAL. The OS kernel, for example, manages process address space creation, threading, and thread dispatching. The OS kernel is also responsible for managing system memory, both physical memory and the virtual memory address space built for each executing process. At the heart of the OS kernel is a set of synchronization primitives that are used to ensure that, for instance, the same block of physical memory is only allocated to one process address space at a time.
 
Kernel mode is associated with a hardware privilege level that allows privileged-mode instructions to be executed. An example of a privileged-mode instruction is one that is reserved for the OS to use to switch the processor from executing code inside one thread to code in another. An essential core service of an OS is to function as a traffic cop, managing shared resources such as the machine’s CPUs and its memory on behalf of the consumers – threads and processes, respectively – of those resources.
 
Before moving on to the next set of OS components, I should mention the HAL, or Hardware Abstraction Layer, a unique feature of Windows designed to insulate the rest of the OS from specific processor architecture dependencies. It hides hardware-specific interfaces: the way the processor hardware implements the processing of interrupts from attached devices, handles errors like a thread accessing a memory location in a page that doesn’t belong to it, or performs context switching. These are all functions that processors handle, but different hardware platforms tend to do them in slightly different ways. Consolidating the hardware-dependent code that has to be written in the machine’s assembly language in the HAL makes it relatively easy to port Windows to a new processor architecture. To port Windows to the ARM processor, for example, Microsoft first needed to develop a version of the HAL specific to the ARM architecture, and then build a cross-compiler that knows how to translate native C code into valid ARM instructions to generate the rest of the OS. I am making the port to ARM sound a whole lot easier than I am sure it was, but over the years the HAL has enabled Windows to be ported relatively easily to run on a wide range of hardware, including the Digital Alpha, the PowerPC, Intel IA-64 (the Itanium), and AMD64 (which Intel calls x64).
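The architectural idea behind the HAL can be sketched in a few lines: the portable upper layers of the OS are written once against an abstract interface, and only the per-architecture implementation behind it changes in a port. All the names below (Hal, MaskInterrupts, boot_banner) are invented for illustration; they are not real HAL entry points.

```cpp
#include <cassert>
#include <string>

// Hedged sketch of a hardware abstraction layer. Each supported
// architecture supplies its own implementation of this interface.
struct Hal {
    virtual ~Hal() = default;
    virtual std::string Architecture() const = 0;
    virtual void MaskInterrupts() = 0;  // architecture-specific in real code
};

struct X86Hal : Hal {
    std::string Architecture() const override { return "x86"; }
    void MaskInterrupts() override { /* a cli-equivalent would go here */ }
};

struct ArmHal : Hal {
    std::string Architecture() const override { return "ARM"; }
    void MaskInterrupts() override { /* a cpsid-equivalent would go here */ }
};

// The rest of the "kernel" is written once, against Hal only, and never
// needs to change when a new architecture is added.
std::string boot_banner(const Hal& hal) {
    return "Booting on " + hal.Architecture();
}
```

Porting to a new processor then means writing one new Hal implementation (plus a compiler back end), while everything coded against the abstract interface comes along for free, which is roughly the story of the ARM port.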
 
Figure 1 also illustrates the device drivers in Windows. I mentioned that the Microsoft strategy for Windows 8 on tablets is designed to leverage the extensive ecosystem of hardware manufacturers that Microsoft has built over the years, thanks to the ability of anyone to extend the OS by writing a device driver to support a new piece of hardware. Windows’ “Plug and Play” facilities for attaching devices have grown into a very sophisticated set of services, including ways for device driver software to tap into Windows Error Reporting, for example.
 
In general, device drivers are modules that also run in kernel mode and effectively serve as extensions to the OS. Their main purpose is managing hardware resources other than the CPU and memory. Windows device drivers are installed to manage any and all of the following devices:
  • disks, CD, and DVD players/recorders attached using IDE, SCSI, SATA, or Fibre Channel adaptors
  • network interface adaptors, both wired and wireless
  • input devices such as the mouse, keyboard, touch screen, video camera, and microphone(s)
  • graphical output devices such as the video monitor
  • audio devices for sound output
  • memory cards and thumb drives
as well as pretty much any device that plugs into a USB port on your machine. In Windows 8, the list of supported devices expands to include GPS receivers and accelerometers.
 
 Windows currently provides an open “Plug-and-Play” model that permits virtually anyone to develop and install a device driver that extends the operating system. Figure 2 is a screen shot from a portable PC of mine showing the Device Manager applet in the Control Panel that tells you what Plug-and-Play hardware – and the device driver associated with that hardware – is installed. As you can see, it is quite a long list. This flexibility of the Windows platform is a major virtue.


Figure 2. The Device Manager applet in the Control Panel tells you what Plug-and-Play hardware is installed, along with information about the device driver software associated with that hardware.

For the sake of security, you want to ensure that any OS function that doesn’t absolutely need to run in kernel mode doesn’t. But, by their very nature, because they need to deal directly with hardware device dependencies, device drivers must run in kernel mode. Device drivers in Windows don’t actually interface with the hardware directly – they use services from the HAL and the Windows IO Manager to do that. This mechanism allows device drivers to be written so that they, too, are portable across hardware platforms. The upshot is that, once Windows is ported to ARM-based SoC machines, you ought to be able to plug in virtually any device that you could plug into an Intel-architecture PC, and it will run.
 
As a practical matter, Windows has a device driver certification process that the major manufacturers of peripheral hardware use. So, not every piece of hardware you can attach to a Windows 7 PC, like the one illustrated in Figure 2, will have immediate support for the Windows RT environment on ARM. Microsoft also wants hardware manufacturers to take the extra step of packaging their drivers into Windows Store apps.
 
The open, plug-and-play device driver model Windows uses permits an almost unlimited variety of peripherals to be plugged in to extend your Windows machine. Consider printer drivers in Windows. Manufacturers like HP have developed very elaborate printer drivers that let you know when you’ve run out of ink and then try to nudge you into buying expensive ink cartridges from them online. In contrast, try to print a document from your iPad. Can’t do it; no device drivers.
 
This great virtue of the Windows OS can also be a curse. The disadvantage of the “open” model is that it is open to anyone to plug into and start running code with kernel mode privileges. Historically, whenever your program needed a function that required kernel mode privileges, you could develop a device driver module (a .sys module) and drop that into the OS, too.
 
Being open leads to problems with drivers of less than stellar quality and also creates a potential security exposure. The fact is that 3rd party device driver code, running in kernel mode, is a major source of the problems that all too frequently cause Windows to hang, crash, or blue screen. It is often not Microsoft code that fails, so there isn’t much Microsoft can do about this – other than take the steps it already has, like the certification program, to try to improve the quality of 3rd party driver software. The fact that my device driver can be deployed on machines configured with such a wide variety of other hardware that my software may need to interact with greatly complicates the development and testing process. That diversity leads to complexity, and complexity directly impacts the quality of the software. Bugs inevitably arise whenever my software encounters some new and unexpected set of circumstances.
 

Both a blessing and a curse

 
A good way to illustrate the advantages and the disadvantages of the Windows open hardware policy is to look at graphics cards for video monitors. The lightweight portable PC I am typing on at the moment has a 14” display, powered by a graphics chip made by Intel that is integrated on the motherboard. When I use this portable PC at my desk, I slide it into a docking station where two additional video monitors are attached, powered by a separate, higher end NVIDIA graphics card. (The docking station actually supports up to four external monitors, but I am pretty much out of desk space the way things are at the moment, so I will have to get back to you on that.)
 
One of the external flat panel displays is 1920 x 1200; the other is only 1920 x 1080. I have one positioned on the left of the portable and the other on the right. In addition, I have a 3rd party port replicator plugged into a USB port on the back of the PC. This device has additional video ports that I am currently not using. If you look at the Screen Resolution applet in the Control Panel, my configuration looks like I have four video monitors available, not three.

See the screen shot in Figure 3.
 
Figure 3. The Screen Resolution on my portable PC when I plug into a docking station with additional video monitors attached. It shows four video monitors are attached, when physically, there are only three. The 4th is a phantom device that is detected on an additional port replicator (attached via a USB port) that supports additional video connections.


This desktop configuration has multiple external monitors augmenting the built-in portable display, which is “only” 1600 x 900. When you are doing software development, take my word for it, it helps to have as much screen real estate as possible. Visual Studio also has pretty good support for multiple monitors, and I have really come to rely on this feature. When coding or debugging, I can have multiple windows displaying code inside the VS editor open and arrayed across these monitors at any one time. Having multiple monitors is a tremendous aid to developer productivity. One reason I purchased this portable PC was that it is lightweight for when I need to pack it and go. But, in fact, the primary reason I purchased this specific model was that it came with the high-end NVIDIA graphics adaptor, so I could plug in two or more external monitors when I am using it at my desk.
 

I am very satisfied with the graphics configuration I have, but it is not exactly trouble-free, and I have had to learn to live with a few annoying glitches. For instance, when I swing the mouse across an arc from the monitor on the left to the monitor on the right, Windows will let the mouse go off the deep end and enter the “display” of the phantom 4th monitor, where I can no longer see where it is. When I first drag a Visual Studio panel or window onto either one of the external monitors, there is evidently a bug in the graphics adaptor code that stripes solid black rectangles across portions of the window. This bug is apparently WPF-related, because it doesn’t show up in standard Windows applications like Office or IE. (One of the features of Windows Presentation Foundation is that it provides direct access to high-resolution rendering services on the graphics card, and this is supposed to be a good thing. For one thing, these higher-end graphics cards are like high-speed supercomputers when it comes to vector processing.) Fortunately, re-sizing the window immediately corrects the problem, so I have learned to live with that minor annoyance, too.
 
Occasionally, the graphics card has a hiccup, the screens all black out, and I have to wait a few seconds while the graphics card recovers and re-paints all the screens. Very infrequently, the graphics card does not recover; there is a blue screen of death that Windows 7 hides, and a re-boot.
 
Overall, as I said, I am pretty happy with this configuration, but it is certainly not free of minor glitches and occasionally succumbs to a major one. Understanding that my particular configuration of PC, its graphics adaptors, the docking stations, and the characteristics of the external monitors is essentially unique, I am resigned to the fact that NVIDIA is unlikely to ever fix my peculiar set of problems.
 
Windows has a remarkable automated problem reporting system that will go out on the web following a graphics card meltdown and try to match the “signature” of my catastrophic error to the fixes NVIDIA has made available recently to its “latest and greatest” version of its driver code to see if there is a solution to my problem that I can download and install. But, realistically, I don’t expect to ever see a fix for this set of problems. They are associated with a combination of hardware and software (adding Visual Studio’s use of WPF to the mix) that, if not exactly unique, is still pretty rare. Inside NVIDIA, any developer working to fix this set of bug reports would have difficulty reproducing them because their configurations won’t match mine. That, and the fact that there aren’t too many other customers reporting similar problems – again, because of the unique environment – means the bug report will be consigned to a low priority, “No-Repro” bin where no one will ever work on it.
 
There is another way to go about this, which is Apple’s closed model. On Apple computers and devices, with few exceptions, the only peripherals that can be attached to a Mac are those branded by Apple and supported by device drivers that Apple itself supplies. To be fair, Apple is more open than it used to be. Since Apple switched over to Intel processors, the company has opened up the OS a little to 3rd party hardware, but it has not opened it up a whole lot. I can buy a MacBook Pro, for example, which is equipped with a middling NVIDIA graphics card, and attach an external Apple Thunderbolt 27” display to it. The Thunderbolt is a beautiful video display, mind you, 2560 by 1440 pixels, but it costs $900. I can’t configure a 2nd external monitor without moving to one of the Apple desktop models.
 
However, and this is the key take-away from this rambling discussion, limiting the kind of monitors and the array of video configurations that the MacBook can support does lead to standardized configurations that Apple can ensure are rigorously tested. And, this leads to extremely high quality, which means customers running an iMac do not have to endure the kinds of glitches and hiccups that Windows customers grow accustomed to. On Windows, there is support for a significantly broader array of configuration options, but Microsoft cannot deliver quite the same level of uniformly high quality to that support. Using an open model that permits virtually any third party hardware manufacturer to plug their device into Windows effectively means that Microsoft has farmed out some of the most rigorous requirements for quality control in Windows to third parties.

Open vs. Closed hardware models

The flexibility of the open model used in Windows certainly has its virtues, as I have discussed. In its Windows 8 challenge to the iPad, it makes good business sense for Microsoft executives to try to take advantage of the flexibility of the Windows platform and leverage the range and types of hardware that Windows can support, compared to an Apple PC or tablet.
 
The Windows organization in Microsoft is certainly aware that the high level of quality control that Apple maintains by restricting the options available to the consumer can be a significant, strategic advantage. Each release of Windows features improvements to the device driver development process to help 3rd party developers. The Windows organization performs extensive testing using popular 3rd party hardware and software in its own labs. Microsoft also provides most of the driver software you need in Windows when you first install it. However, a good deal of this responsibility for quality is farmed out to its OEM customers – the PC manufacturers – who need to ensure you have up-to-date video drivers and other drivers for the specific hardware they include in the box.
 
Microsoft has also made an enormous investment in automated error reporting and fix tracking associated with the Windows Update facility, which is very impressive. IT organizations often disable Windows Update because they fear the unknown, but its capabilities are actually quite remarkable. (There is a good description of Windows automated error reporting and the Windows Update facility in an article published last year in the Communications of the ACM.) Windows gives third parties access to its bug databases, and the Windows organization will proactively pursue getting a fix out to third party software, if it is affecting an appreciable number of customers. However, a staggering number of customers run Windows – well over a billion licensed copies exist – so that still leaves customers like me, with relatively minor glitches associated with relatively unusual configurations, with little hope of relief. I am not saying it is impossible that I will ever see a version of the NVIDIA driver that fixes the problems I experience, but I am not holding my breath.
 
Battery life on portables is a good example where, despite considerable efforts from Microsoft to support the device driver community, Apple has a distinct technical advantage. Now that Macs are running the same Intel hardware as Windows PCs, Apple hardware has no inherent advantage when it comes to battery life. Yet, running on similar sets of hardware, Apple machines typically run about 25% longer on the same battery charge. Most of this advantage is due to the control that Apple exercises over all aspects of the quality of the OS, the hardware, and the hardware driver software that it delivers. (Some of it is due to shortcomings in Windows software, specifically system and driver routines that wake up periodically to look around for work. One of the culprits is the CPU accounting routine that wakes up 64 times a second to sample the state of the processor. Hopefully, this behavior has been removed in Windows 8, but I suspect it hasn’t.) In contrast, Microsoft has to periodically orchestrate battery life-saving initiatives across a broad range of 3rd party device driver developers, which is akin to herding cats.
 
Microsoft’s decision to build and distribute its own branded tablet, the new Surface, does reflect an understanding at the highest levels of the company that the Apple products that Microsoft must compete with have a distinct edge in quality, compared to the products from many of its major Windows OEM suppliers. I have heard Steve Ballmer in department-level meetings discuss his reluctance to abandon the “open” and cooperative business model that has served the company so well for so long. It is a business model that definitely leads to more choice among products across the OEM suppliers and lower prices to consumers because of the competition among those suppliers.
 
 
It is also a business model that has forced Microsoft’s Windows OEM customers to live for years with meager profit margins in a cutthroat business – high volume, low margin, capital-intensive, with little room for error. Meanwhile, Microsoft has consistently raked in most of the cream right off the top of that market in software license fees for Windows and Office that it collects directly from those same OEMs. Microsoft’s high-handed behavior led IBM to exit the PC hardware market long ago. HP, which has struggled for years to make a profit in the same line of business, would also like to exit the business, but its management still has the albatross of the Compaq acquisition around its neck, constraining its ability to shed an asset that cost the company dearly to acquire. The problems Microsoft’s OEM partners face are obvious – an Intel “Ultrabook” configured like a MacBook Air runs $1200 this Christmas, while comparable hardware from HP that runs Windows retails for 40% less. The margins Apple is able to command for its hardware products are the envy of the tech industry.
 
By getting its support for tablets into consumer-oriented sales channels in time for the Christmas rush, Microsoft is hoping Win 8 can make a dent in the huge lead Apple has fashioned in the emerging market for tablets. Meanwhile, at least in the short term, sales of the new Microsoft Surface are going to be restricted to Microsoft’s direct sales outlets, currently numbering only about 60 stores. (Plus, you can order it direct from the Microsoft Store over the web. Currently, Microsoft is forecasting about a 3-week delay before it can ship you one.) With Windows OEMs primarily pushing a variety of Windows 8 machines running AMD and Intel processors, Christmas shoppers are bound to be confused by all the choices available: AMD vs. Intel, Intel Core vs. Intel Pentium, and the Microsoft Surface on ARM. It is all a little overwhelming to the average consumer, who just wants something little Timmy can use for school.
 
 

Back to the future

 
All of which brings us full circle back to Windows RT because the new Surface tablet can only run applications that use Windows RT. In brief, Windows RT is a new API layer in the OS that ships with every version of Windows 8, including Windows Server 2012. (“RT” stands for “run-time.”) If you buy one of the new ARM-based tablets (or phones when Windows 8 phones start to ship), these devices come with RT installed, omitting many of the older pieces of Windows that Microsoft figures you won’t ever need on a tablet or a phone.
 
As Figure 1 illustrates, this new API layer sits atop the existing Win32 APIs, which I have heard Windows developers say consist of some 300,000 different methods. As illustrated, Windows RT does not come close to encompassing the full range of OS and related services that are available to the Windows developer. Microsoft understood that it could not attempt to re-write 300,000 methods in the scope of a single release, so Windows RT should be considered a work in progress. What Microsoft tried to accomplish for Windows 8 was to provide enough coverage with the first release of Win RT that developers would be capable of quickly producing the kinds of apps that have proved popular on the iPhone and the iPad. As shown in the drawing, Windows Store apps also can make certain specific Win32 API calls that were not fully retrofitted into the new Windows Runtime.

Summing up.

 
In general, I am certain that porting the Windows OS to the ARM platform for Windows 8 was an excellent decision that should breathe some new life into the Microsoft PC business. ARM processors have evolved into extremely powerful computing devices – quad-core is already here & 64-bit ARM is on the way, for example. Portable, touch-screen tablets are a very desirable form factor. I have never seen a happier bunch of computer users than iPhone stalwarts chatting up Siri. Windows needed to try to catch up and perhaps even leap frog Apple before its lead in portable computing became insurmountable.
 
When Windows 8 was in the planning stages, the Windows Phone OS, which was adapted from Windows CE, was already running on ARM. At the time, there were at least two other major R&D efforts inside Microsoft that were also targeting the ARM platform. The Windows organization, led by Steve Sinofsky, effectively steamrollered those competing visions of the future of the OS when it started to build Windows 8 in earnest. And, for the record, I don’t have a problem with Sinofsky’s autocratic approach to crafting software. Design by committee slowly and inevitably takes its toll, weakening the power and scope of a truly visionary architect’s design breakthrough.
 
One of the crucial areas to watch as Windows 8 takes hold and Microsoft begins development of the next version of Windows is whether or not Windows on devices can keep up with rapidly evolving hardware. Microsoft needs to figure out how to rev Windows on devices much more frequently than it does the rest of the OS. That will be an interesting challenge for an extremely complicated piece of software that needs to support such a wide range of computers, from handhelds to rack-mounted, multi-core blade servers.
 
As delivered, I also believe the vision for Windows 8 suffers from serious flaws. The most noticeable one is the decision to make the new touch screen-oriented UI primary even on machines that don’t have touch-enabled screens. This “one size fits all” strategy condemns many, many Windows customers to struggle to adapt to an inappropriate user interface.
 
Moreover, from the standpoint of a Windows application developer, I am less than enamored with some of the architectural decisions associated with the new Windows Runtime API. These were based on a profound misunderstanding inside the Windows organization about why software developers chose to target Windows development in the first place (going back 20 years or so in the life of the company) and why these same developers are targeting Apple iPhones and iPads today.
 
I will defer the bulk of that discussion to the next blog entry on Windows 8.

Read More
Posted in Windows 8 | No comments

Wednesday, 14 November 2012

Is there an ARM-based PC in your future?

Posted on 14:24 by Unknown
In the previous blog post in this series on Windows 8, I explained that Windows RT is a new application run-time layer in Windows that was built when the Windows OS was ported to the ARM architecture. ARM is the dominant processor architecture used in current smartphones and tablets, including the Apple iPhone and iPad. So, the short answer to the question posed by the title is, “You already do run an ARM-based computer, and it is the smartphone in your pocket.” The problem for Microsoft is that this ARM computer is probably not running an OS based on Windows.

Microsoft’s new Surface tablet, designed to showcase the capabilities of Windows 8, uses an ARM processor. On a Surface, you can only run applications known as Windows Store apps that are specifically built to run on top of Windows RT. You can also install and run Windows 8 on any Intel-compatible 32 or 64-bit processor. The Intel version of Windows 8 is called Windows 8 Pro. Windows 8 Pro includes the new Windows RT application run-time, so Windows Store apps will run on Windows 8 Pro machines. Windows 8 Pro also includes all the older parts of Windows 7, so it is also capable of running “legacy” Windows desktop applications.

If you are a software developer trying to build one of the new Windows Store apps, first you have to install the latest version of Visual Studio and then create a project that targets Windows Store apps. To maintain compatibility across hardware platforms, a Windows Store app can only access functions in the new Windows Runtime, plus some essential Win32 functions that were converted to run on an ARM processor but were not included in the new Runtime. For security reasons, Windows Store apps are run in a silo that limits their ability to interact with the underlying operating system or access any other running process. Microsoft has a certification process that verifies that an app conforms to these requirements before it is made available on the Windows Store web site. (Apple has a similar policy with its App Store.)

The new Runtime layer is quite extensive: see http://msdn.microsoft.com/en-us/library/windows/apps/br211377 to get a sense of its scope. Applications written in either C++ or JavaScript can call into the Runtime directly. The .NET Framework version 4.5 contains some glue to allow Windows Store apps to be written in C#, Visual Basic .NET or any of the other .NET-compatible languages.

Among the essential Win32 functions that are available under ARM are those associated with COM, a key technology used for years and years in Windows to package code into run-time components. In Windows programming, COM interfaces are frequently used to communicate between threads and processes. Many, many Win32 functions rely on COM interfaces. Win32 functions that were migrated to the new Runtime required a wrapper to hide the COM interface from the Windows Store app, but COM is still there under the covers. The complete COM infrastructure was ported to ARM for Windows 8, but the interfaces themselves were not re-written. If you access the “Win32 and COM for Windows Store apps” Help topic for Windows 8 developers at http://msdn.microsoft.com/en-us/library/windows/apps/br205762.aspx, you can see that COM is included in the Win32 subset that was ported to ARM. Drilling a little deeper, you can see that, for example, your Windows Store app can still call CoInitializeEx() to initialize the COM library, just like in the days of old.

So, while Windows 8 apps can call directly into the full set of Win32-based COM APIs, there are some very interesting omissions in the RT API surface. Performance monitoring is one of those omissions. Because the Win32-based performance monitoring interfaces were not ported to ARM, a Windows 8 app cannot access the performance counters associated with CPU accounting, for example, and determine how much CPU time it is consuming.

(Note: there is a workaround available. Any app on RT can still make a call directly into kernel32.dll and pull CPU consumption at the process level from it. You can use this hack while you are developing the app, but you must remove that PInvoke from the finished app before you submit it for certification, according to this Q&A article posted at http://stackoverflow.com/questions/12338953/is-there-any-way-for-a-winrt-app-to-measure-its-own-cpu-usage?lq=1. Microsoft won’t permit a retail version of a Windows Store app to call into kernel32.dll directly.)
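To give a concrete flavor of that workaround, here is a minimal C# sketch along the lines of the Stack Overflow answer. The class and method names are my own invention; the kernel32.dll entry points (GetCurrentProcess and GetProcessTimes) are the standard Win32 ones. Remember that this P/Invoke has to come out of the app before Store certification.

```csharp
using System;
using System.Runtime.InteropServices;

static class CpuUsageHack
{
    // Development-time only: direct P/Invoke into kernel32.dll, which a
    // certified Windows Store app is not permitted to do.
    [DllImport("kernel32.dll")]
    static extern IntPtr GetCurrentProcess();

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GetProcessTimes(IntPtr hProcess,
        out long creationTime, out long exitTime,
        out long kernelTime, out long userTime);

    public static TimeSpan GetProcessCpuTime()
    {
        long created, exited, kernel, user;
        if (!GetProcessTimes(GetCurrentProcess(), out created, out exited,
                             out kernel, out user))
            throw new InvalidOperationException("GetProcessTimes failed");
        // FILETIME values are in 100-nanosecond units, which is exactly
        // what the TimeSpan tick constructor expects.
        return new TimeSpan(kernel + user);
    }
}
```

Marshaling each FILETIME as an `out long` works because FILETIME is a blittable 8-byte structure.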

Instead of relying on performance counters, however, your Windows Store app can utilize ETW. The Win32 APIs that are used to generate ETW trace events or listen to them from inside your app are part of the Win32 subset that Windows Store apps can call into on ARM. (See http://msdn.microsoft.com/en-us/library/windows/apps/br205755.aspx.) The fact that ETW is fully supported on ARM while performance counters are not is, by the way, further evidence that the counter technology is on the wane and tracing is ascendant in Windows.
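For managed code, the most convenient way to generate ETW events is the EventSource class that the .NET Framework 4.5 added for exactly this purpose. Here is a hedged sketch; the provider name, event name, and payload fields are invented for illustration:

```csharp
using System.Diagnostics.Tracing;

// A minimal ETW provider. Under the covers, EventSource registers with
// ETW and writes events that tools like PerfView or xperf can collect.
sealed class AppEventSource : EventSource
{
    public static readonly AppEventSource Log = new AppEventSource();

    // Event id 1; the payload travels with the trace event.
    [Event(1, Level = EventLevel.Informational)]
    public void FrameRendered(int frameNumber, double milliseconds)
    {
        WriteEvent(1, frameNumber, milliseconds);
    }
}
```

A call site then looks like `AppEventSource.Log.FrameRendered(frame, elapsedMs);` — cheap enough to leave in retail builds, since ETW events cost almost nothing when no session is listening.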

It is not entirely clear why performance monitoring was omitted from Windows RT. One possibility is that the application model for Windows Store apps is very different. When you run one of these Apps, it takes over the entire display. When you switch to a different app, the first app is suspended, and it is supposed to dispose of any objects it is currently holding.
 
Still, if you are a game designer trying to support this new class of Windows devices, leaving out the performance monitoring capabilities is worrisome. Physical memory on the Surface is limited to 2 GB, so RAM is decidedly a constraint. The Surface uses a 4-way ARM multiprocessor, running at 1.3 GHz, so RT does support multi-threading. In fact, the RT support for multi-threading is modeled on the Task-based async/await pattern of asynchronous programming introduced recently in the .NET Framework.
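For readers who have not run across the pattern yet, here is a minimal C# sketch of Task-based async/await (the method name and URL parameter are hypothetical). The point of the pattern is that the calling thread, typically the UI thread, is not blocked while the I/O completes:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

static class Downloader
{
    // "async" marks the method as resumable; "await" yields control back
    // to the caller until the download finishes, then resumes here.
    public static async Task<int> GetPageLengthAsync(string url)
    {
        using (var client = new HttpClient())   // new in .NET 4.5
        {
            string body = await client.GetStringAsync(url);
            return body.Length;
        }
    }
}
```

Windows RT pushes this model hard: any Runtime operation that could take more than a few tens of milliseconds is exposed only in asynchronous form.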

To reiterate, the key deliverable in the new OS is the port to the ARM platform. I will drill into ARM and its implications for the OS in the next post in this series.

Read More
Posted in Windows 8 | No comments

Monday, 29 October 2012

Is Windows RT in your future?

Posted on 12:21 by Unknown
I am writing this in the wake of the Windows 8 launch on October 26. It continues a blog entry I posted last week that discusses some of my early experience running and testing the new Windows 8 release. I want to focus here on discussing what Windows RT is, which seems to be generating a good deal of confusion. That is probably because Microsoft has not done a great job of explaining what exactly Windows RT is. Windows RT itself is not as complete and as fully realized as it should be, and that, of course, is another source of some of the confusion.

If you go out to the Microsoft Store, you will see this description of Windows RT:

“Windows RT is a new version of Microsoft Windows that's built to run on ARM-based tablets and PCs. It works exclusively with apps available in the Windows Store.

Windows 8 Pro runs current Windows 7 desktop applications. It can also use the programs and apps available in the Windows Store.”

That first sentence is confusing because Windows RT is part of Windows 8, wherever it runs, including Windows Server 2012, if you decide to install the GUI. Windows 8 Pro can run “programs and apps available in the Windows Store” because Windows RT is in there, too. Win RT is part of Windows 8. Windows 8 Pro runs on Intel-based machines and includes all the pieces of Windows that provide backward compatibility with Windows 7 and Windows 7 applications.

In brief, Windows RT is a new API layer in the OS that ships with every version of Windows 8, including Windows Server 2012. “RT” stands for “Run-Time.” For the new Windows Store apps, Windows RT encompasses the set of OS services that one of these apps can expect to have always available. If you buy one of the new ARM-based tablets (or phones when Windows 8 phones start to ship), these devices come with RT installed, omitting many of the older pieces of Windows that Microsoft figures you won’t ever need for an app designed specifically to run on a tablet or a phone. Such a device is not going to have enough screen real estate to make it sensible to generate lots of child windows and support an MDI interface, for example. But it may want access to high-res graphics or play an audio file or an HD video. Windows RT encompasses those kinds of services.

As Figure 1 illustrates, this new API layer sits atop the existing Win32 APIs, which I have heard Windows developers say encompass some 300,000 different methods. As illustrated, Windows RT does not come close to wrapping the full range of the OS and related services that are available to the Windows developer. Microsoft understood that it could not attempt to re-write 300,000 methods in the scope of a single release, so Windows RT should be considered a work in progress. What Microsoft tried to accomplish for Windows 8 was to provide enough coverage with the first release of Win RT that developers would be capable of quickly producing the kinds of apps that have proved popular on the iPhone and the iPad. If you think about what an app that streams movies and TV shows from Netflix needs to do on a Win 8 ARM-based tablet, the APIs to accomplish that should all be in Win RT.

 

Figure 1. Windows RT is a new API layer in Windows 8, not a separate version of the operating system, as the company’s marketing literature implies. The Win RT API covers only a small portion of the surface area of the existing Win32 APIs, targeting only the APIs that are needed to build a new Windows 8 App Store app.

 
The implication is that WinRT, and all of its dependencies inside the OS, were ported to the ARM. But not everything that is in the full Intel-based OS is running on the ARM – yet. No doubt, the bloated OS needed to shed some pounds in the process.
 
 Microsoft has documented some additional Win32 APIs that a Windows Store App running on ARM can use. See http://msdn.microsoft.com/en-US/library/windows/apps/br205757. In the Diagnostics area, for example, my area of expertise, Windows Store Apps can call the Win32 APIs associated with ETW. The ETW trace infrastructure used for diagnosing Windows performance problems was ported to ARM. COM was ported, too, for example. Many existing OS APIs use COM interfaces; Win RT APIs supersede many of these COM interfaces, but the existing COM interfaces are, no doubt, still being used under the covers.
 
 Because the software I work on is aimed at performance monitoring across large scale Windows Server infrastructures, I had not been paying that much attention to the details about what Microsoft was planning to deliver for Windows 8 until it was ready to ship. When the final version of Windows 8 did become available to developers back in August, my immediate concern was to determine if our software was compatible with it.
From what I have seen so far, I don’t expect Win 8 to have a big impact on Demand Tech’s software, which initially is a big sigh of relief. The Win 8 release is oriented around tablet PCs and phones. If you don’t have a new touch screen PC, you are probably better off staying with Windows 7 until you do. I don’t see the web portal, for example, that we build that provides data-rich visualizations of detailed Windows performance data being something you’d run on a device the size of a phone. The reporting and charting apps are http-based, though, so if you have access to a browser, you can run the app. If you want, you can access the app on an iPad today.
 
The desktop components we have developed in the company recently all use the Windows Presentation Foundation, or WPF, and Windows RT will not support WPF. That is a concern. Microsoft is no longer investing in WPF, so we need to look at Silverlight instead in the future, which overlaps reasonably well with the parts of WPF we are using today. It is hard to say what the future holds for Silverlight, though. Many Windows developers of desktop applications have similar concerns. This spring I prototyped a new desktop app, using WPF, that was designed to run with multiple windows that could be arranged across multiple monitors. Frankly, I am not sure what to do as I look at the next steps getting that app ready for prime time. It is a data-rich analysis application that benefits from having access to lots of screen real estate. I just wrote it; I don’t relish having to convert it to Silverlight or anything else.
 
Back in 2010 when I was still at Microsoft, I attended a number of planning sessions that involved senior developers from Windows and the Developer Division, where I worked. So I was privy to a lot of the behind-the-scenes discussions going on in the early & middle stages of Win 8 development. At these briefings, the Windows organization also presented a great deal of the market research that supported the planning decisions they made. They talked about the competition from Apple, particularly with the iPhone because the iPad was still pretty new and exotic. They talked about pressure from their major OEM customers to get support for tablets out there quickly before Apple gained an insurmountable lead.
 
Inside the Developer Division, we were charged with delivering the next version of Visual Studio timed to ship when Windows 8 would ship and support Win 8 application development. The Windows 8 briefings I attended concentrated on those aspects of the Windows 8 development plan that were going to impact what needed to be done to support Windows 8 developers. Windows RT was one of the prime topics discussed because of its impact on the Developer Division’s workload. The Windows development organization under Steven Sinofsky’s leadership is very disciplined, and what they hoped to accomplish with Windows RT was spelled out very clearly.
 
Sinofsky is rapidly becoming the new public face of Microsoft. (See http://blogs.wsj.com/cio/2012/10/25/windows-8-launch-targets-consumers-not-business/?mod=wsjcio_hps_cioreport for a photo of Sinofsky skateboarding using a new Surface with wheels attached to demonstrate its tensile strength. A much more interesting demo would be to show if it is capable of surviving being dropped on a concrete surface from a height of, say, ten feet. Many more customers will drop these devices from the height of their kitchen counters than will be skateboarding on them.) Sinofsky is in line to become the next CEO at Microsoft, when Steve Ballmer steps aside.
 
What was less clear was how much Windows would be able to actually accomplish in the time allotted for Win 8 development. Sinofsky’s organization is also very disciplined about cutting any feature that is not going to fit within the scope of the release or might jeopardize the product’s target delivery date. Windows 8 was going to ship in 2012 in time for Windows OEMs to get new tablets and touch screen PCs out in time for Christmas. Slipping the target delivery date was never a possibility.
 
Meanwhile, I have been away from Microsoft for almost two years now. Now that Windows 8 is shipping and all the documentation for Windows 8 developers is in place, it is relatively easy for me to assess how much of the plan the Windows development org was able to deliver.
 
In the next blog entry, I will go into more detail about what Windows RT is, what is in it, and, also, what it doesn’t cover. But the short answer to the question, "Is Windows RT in your future?" is, "Yes." Win RT is a new application run-time layer built into all new versions of the Windows OS.

Read More
Posted in Windows 8 | No comments

Monday, 22 October 2012

An early look at Windows 8 and Server 2012

Posted on 01:38 by Unknown
Windows 8 is about to become widely available to great fanfare, while Windows Server 2012 was quietly released recently “into the wild,” which is the discouraging way many Microsoft product development teams characterize the real world environments where their products run. Third-party developers have had access to the final RTM (Release-to-Manufacturing) versions of the Windows 8 release for several months now.

Here at DemandTech, we have been testing Windows 8 and Windows Server 2012 in the lab, making sure our software is compatible, etc. We have been running Windows 8 on virtual machines exclusively, not the dedicated hardware (tablets, mainly) it was designed to showcase.

The most noticeable change in Win 8 is the new UI. There is also a new kind of Win 8 app that when it runs, takes over the entire screen. Under the covers, the Windows OS is still multithreaded, but the interaction model is that, in any of the new apps, you are only working on one thing at a time. It is like -- there aren't "windows" anymore. The new apps run full screen, maximized all the time, although you can also snap them to a portion of the screen, once you get the hang of that gesture.

I happen to have a leg up on the new UI mainly because I’ve been using a Windows Phone, which Microsoft was nice enough to buy me when I was still working there back in 2010, and the UI is similar. On a device the size of a phone, the display is too small to view multiple application windows anyway – there just isn’t enough screen real estate to do that. On a traditional desktop machine, the new UI takes some getting used to, but on a phone or a tablet, it all makes sense.

Windows Phone 7 actually has quite a good UI. My phone is one made by Samsung. I like the UI much better than the Android phone it replaced. The e-mail is excellent – well, it’s effectively Outlook for the Phone, so there is consistency across my devices, and it is synchronized with Exchange mail, calendar, and contacts. The word recognition app when you are typing – a text message, e-mail, or even a web search – is also very well executed. I had two earlier versions of Windows Phone, and this app is a distinct improvement, and miles better than a similar app on my Android phone. Getting a UI right is a lot of trial and error, and the Windows Phone 7 word recognition app was clearly improved from earlier versions.

The Windows 8 UI is very similar to Windows Phone 7.x, featuring the same large buttons that are like movable tiles on your primary screen. These tiles are big, and their size makes it easy to click on one using a touch screen. Moving to a tablet that offers me the same functions and features as my old phone, but in a larger, yet still convenient, format is an attractive option to me. I haven't ordered my Microsoft Surface yet (see below), but there is definitely a device like that in my future.



In Windows 8, this UI that was originally developed for Windows Phone version 7 – which very few people bought – is front and center, the foundation for supporting Windows 8 tablets and phones. The trade-off here is that a touch screen UI designed for a tablet PC is not necessarily the best choice for a machine that you access using a mouse and keyboard. In Windows 8, you can get to the traditional Windows desktop easily enough, run all your older apps, and navigate between them the way you are used to working. But the “Start” button is gone; to start a new app you have to revert to the new UI, and you find yourself switching back and forth between the two “desktops” a little more than is probably optimal.

It takes a little getting used to. See Paul Allen’s blog for a better and more complete discussion of the new UI, based on his opportunity to use it on one of the new touch screen tablet PCs.

Undoubtedly, Windows 8 is an important release for Microsoft. According to the market research I’ve looked at, the number of smartphones sold will exceed the number of PCs sold for the first time in 2012. Unfortunately for Microsoft, only about 3% of the smartphones sold so far this year worldwide are Windows Phones. Even more menacing to Microsoft’s software business over the long haul, the forecast is for PC sales to level off as more capable tablets start to supplant portable PCs. IMHO, portable tablets replacing portable PCs is as inevitable as portable PCs supplanting desktop models once they became capable enough. Tablets are not quite there yet, but it is still early in the evolution of the hardware.

Apple, with its iPhone, iPad, and even its iMac Intel-based PCs, is starting to penetrate Microsoft’s core business in desktop OS software, which would also put Microsoft's Office franchise in some distinct peril. This is the first time since Windows emerged as the leading PC desktop OS in the late 1980s that there has been a credible threat to Microsoft’s market dominance. Meanwhile, Apple has attracted software developers in droves to its iPhone-iPad platform, lured by the sheer number of devices in the hands of consumers and the potential payoff of the next "killer app" such as Angry Birds or Words with Friends.

The major Windows 8 UI changes were designed for tablets, an emerging market where Apple and Google Android devices have a head start, but hardly an insurmountable lead. Windows-based tablets have the potential to make an impact in the emerging market for lightweight tablets, mainly because of what current hardware can deliver. A tablet PC today can offer:
  • a powerful, multi-core processor,
  • docking support that provides access to a wide variety of peripherals,
  • at least 8 hours of run time on a single charge,
  • a form factor of approximately 8.5” x 11”, which means it slips easily into a briefcase or purse,
  • a weight of only about 1.5 lbs, and
  • a touch screen.
All in all, a very attractive alternative to portable PCs.

The scenario I envision is this: I plug my Windows tablet into a dock at the office, run Microsoft Office apps (or even Visual Studio) with an attached keyboard and mouse, plug in an extra video monitor (or maybe even two), and do everything I can do today on my portable PC. At the end of the work day, I hit a key to save my work in progress incrementally to cloud storage, slide my tablet into my backpack, and go. At home, I could duplicate my office setup, or just use the tablet “naked” to access e-mail, Skype, or the web. This is Microsoft’s vision for Windows 8, and it is a tempting one.

The reason that Microsoft is in a position to deliver on this vision of mobile computing without compromise comes down to two words – device drivers. Basically, Windows 8 lets you connect almost any device – keyboard, mouse, Kinect, Bluetooth headset, graphics drawing pad, docking station, external monitor, camera, speakers, you name it – to your Windows 8 tablet PC. You should be able to plug in any device with a Windows 7 device driver, assuming you have the right plug. There are no breaking changes to the OS device driver model in Windows 8. (There is some minor new work required to re-package your driver software so that it is compatible with the new App Store.) And there will be Windows 8 tablets available on Day One with USB ports.

Given where tablets are today, this is big. If you look at the current tablets on the market, including Apple’s iPad and a variety of devices that run Google’s Android OS (which was derived from Linux), including the Amazon Kindle, they all have limited capabilities. These are fun devices, but for people who use their PCs in their work – content creators of all kinds – they cannot substitute for portable PCs. Just like me, these folks are good prospects for buying a Windows 8 tablet.
The new tablets will use ARM processors, as well as Intel-compatible CPUs. One of the things I will talk about in the next blog entry is the ARM support in Win 8. On the Windows Server 2012 side, what's happening with virtualization and with NUMA support is also quite interesting.

Of course, I also want to talk about some of the new performance counters that are available. And I don't know if I can resist saying at least something about the new runtime, Windows RT, and what it means for the future of .NET development.
More on those topics next time.
Posted in Windows 8 | No comments