
1E WakeUp – How It Works


I recently posted a lengthy and detailed blog, 1E WakeUp Server and its AgentFinder Process. That article introduced the concept of identifying the 1E Agents present on the local subnets within the boundaries of a WakeUp Server installation, and assigning two of them the roles of primary and alternate agent for their respective subnets. The purpose of this effort comes into play when it becomes necessary to ensure that client machines on a given subnet are up and running at a designated time. The technology uses one of those remote agents as a proxy for the WakeUp Server, powering on the required systems with magic packets in a traditional Wake-On-LAN (WOL) scenario. This article takes the reader from where the previous article left off through to the final stage of the process. In the previous article we stated that a fundamental element of the wakeup operation in our 1E NightWatchman product is ensuring there is a proxy 1E Agent up and running at all times on every subnet. That agent works in concert with the server-side agent, receiving wakeup requests from the server and then creating and issuing magic packets to its subnet neighbors.

Now that we have identified a pair of 1E Agents on each subnet (documented in the previous article referenced above), how do we ensure that there is always at least one agent up and running after hours if we are applying a power management policy to turn them off at a specified time? How are these actually used when it is time to awaken systems? Let’s take each of these questions in turn. Together they provide the end-to-end story.

Last Man Standing (LMS)

This term refers to the process used to ensure that there is always one agent up and running on each subnet to act as the proxy for the WakeUp Server. For this discussion, we continue to assume we are working in an integrated fashion, where 1E WakeUp Server is installed on a Microsoft System Center Configuration Manager (SCCM) primary site server. Its purpose is to identify when a mandatory deployment is scheduled to execute (explained extensively in the previous post, above). At the scheduled execution time, the WakeUp Server component derives the list of all machines in the deployment's target collection, including all of the data needed to create a magic packet. It then determines which agent(s) on a given subnet are, or should be, up and running to receive this data and then create and send the magic packets to the target systems.

The problem here is simple: how do we ensure that one or the other of the previously discovered and assigned primary and alternate machines remains on at all times? If one of the two is powered off, we need to ensure that the other is powered on. Whichever remains on is referred to as the Last Man Standing on that particular subnet.

So how does a machine get powered off in the first place? Assuming there wasn't a power outage, there are really only two scenarios: a NightWatchman power policy is applied to a location (e.g. "I want all of your systems to shut down at 6pm"), or the user of the machine initiates a normal shutdown, perhaps leaving work early (i.e. from the Windows START menu). In either case, we need to ensure that one of the systems remains powered on and retains the role of primary agent, as it is to this system that the WakeUp Server hands off the task of waking the required systems on the local subnet.

In the first scenario, where a system is told to shut down via a policy ("It's now 6pm. It's time to shut down"), the primary agent simply ignores the request and stays powered on. It is the last man standing. The alternate, because it is the alternate, shuts down as directed. When the user initiates the shutdown, however, things get interesting. In this scenario, the primary agent will not ignore the shutdown. Instead, it will interrogate the alternate agent. If the alternate is on, the primary role is transferred to it (and that system will then ignore any policy-based shutdown). If, on the other hand, the alternate is in an off state, the primary (in the process of shutting down) will first wake the alternate, which then assumes the primary role and becomes the last man standing. The following short video animation illustrates these scenarios clearly.
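In addition to the animation, the role hand-off can be pictured as a small decision routine. The Python sketch below is a purely illustrative model of the logic just described; the function names, callbacks, and return values are hypothetical and do not reflect the actual 1E Agent implementation.

```python
# Illustrative model of the Last Man Standing (LMS) hand-off described above.
# This is NOT the 1E Agent implementation; names and messages are hypothetical.

def on_shutdown_request(role, reason, alternate_is_on, wake_alternate, transfer_primary_role):
    """Decide what an agent does when asked to shut down.

    role                  -- "primary" or "alternate" on this subnet
    reason                -- "policy" (NightWatchman schedule) or "user" (manual shutdown)
    alternate_is_on       -- current power state of the alternate agent
    wake_alternate        -- callback that sends a magic packet to the alternate
    transfer_primary_role -- callback that hands the primary role to the alternate
    """
    if role == "alternate":
        return "shut down"              # the alternate always complies

    if reason == "policy":
        return "stay on"                # the primary ignores policy shutdowns: last man standing

    # reason == "user": the primary must not leave the subnet without a proxy
    if not alternate_is_on:
        wake_alternate()                # wake the alternate first
    transfer_primary_role()             # the alternate becomes the new primary
    return "shut down"                  # then honour the user's request
```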

 

“WAKE UP, People! There is WORK to do!”

Now that we know how to discover a pair of systems on every subnet, assign them a primary or alternate agent role, and ensure one stays on at all times, what exactly does that agent actually do when it's called upon by the WakeUp Server? As we discussed earlier, when an SCCM mandatory deployment reaches its scheduled execution time, the WakeUp Server identifies all the systems in the collection that the active deployment targets. The list of those machines, along with the elements needed to create a magic packet (all taken from the SCCM database inventory of those machines), is then parcelled out to the primary agent on each subnet involved. (Note: the WakeUp Server install actually places a 1E Agent on that server, in addition to the WakeUp Server service. The service hands off the wakeup list to its companion agent, and that agent does the communication with the respective primary agents involved. I've omitted that small element here for simplicity.)

It is important to note that this process is done over simple HTTP/HTTPS. Consequently there is absolutely no impact to, or need for configuration of, any routed network equipment anywhere in the enterprise! The primary agent on a subnet simply receives the instruction list of the machines needed on that subnet. It then crafts industry-standard WOL magic packets (using the data included in the wakeup list generated at the server side) and sends them to the target devices. Once those machines are powered on, the SCCM client agent starts, sees the existing (or perhaps receives new) policy to execute the task that started all of this in the first place, and the task is initiated.

If the full NightWatchman product is in play, then once the SCCM task is complete, and assuming there is not a second task scheduled in the near term, the shutdown element of the 1E Agent automatically returns the awakened client to the appropriate low-power state. Simply put, this means that with the addition of NightWatchman in the environment, critical tasks like Patch Tuesday security updates can now be deployed overnight, complete with the needed "reboot" (i.e. shut down when the task completes; powered on the next morning for normal operations), resulting in near 100% success and added security overnight! The following animation illustrates this process nicely.
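Alongside the animation, it may help to see what an industry-standard magic packet actually looks like on the wire. The short Python sketch below builds and broadcasts one; it shows only the standard WOL frame format (six 0xFF bytes followed by the target MAC address repeated sixteen times) and is not 1E code. The MAC and broadcast addresses are placeholders.

```python
import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Build and broadcast a standard Wake-On-LAN magic packet.

    A magic packet is 6 bytes of 0xFF followed by the target MAC repeated 16 times,
    usually sent as a UDP broadcast on port 9 (or 7).
    """
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    payload = b"\xff" * 6 + mac_bytes * 16

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, (broadcast, port))

# Example (placeholder MAC): wake the machine whose NIC is 00:11:22:33:44:55
# send_magic_packet("00:11:22:33:44:55", broadcast="192.168.1.255")
```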

 

 

While the above describes the most common scenario, 1E WakeUp integrated with SCCM, the process of waking systems in a standalone NightWatchman Enterprise installation is essentially identical. The only difference is that the standalone installation doesn't have the ad hoc waking action generated in concert with something like an SCCM deployment schedule. A standalone wakeup would more likely be initiated manually by the administrator waking a group of machines from the server console directly, for example. Regardless of how the action is initiated at the server side, the entire process of creating and handing off the wakeup list of target machines to a local proxy is identical, as the clients also report a basic inventory upon installation, which is adequate for magic packet creation.

A Few Final Thoughts

In order for any of this WOL-based technology to work, whether it is our NightWatchman solution or any other, there are a few basic caveats that need to be made clear. First of all, the managed computers themselves need to be properly configured to support WOL magic packets in the first place. This configuration typically lives in two places: in the network interface card (NIC) driver itself (usually found under the [Power Management] property page, as shown below) and in the computer's BIOS.

It is also worth noting the importance of setting the above option to "Only allow a magic packet to wake this computer". This prevents random bits of noise that may be present at the network port from waking the machine needlessly. If you are unsure whether a given machine will respond to a magic packet, or whether it is properly configured to do so, 1E provides an excellent tool in our 1E Free Tools repository called Magic Test. It provides a quick and easy means to determine whether a machine will wake at all, and whether it is even receiving magic packets in the first place.

The web reporting system included with NightWatchman provides a wealth of statistics around wakeup successes, failures, and so on. Armed with this data, together with native SCCM deployment success reports, the administrator has comprehensive information about the environment and its general condition as it relates to WOL activities.

Lastly, there are also scenarios where no WOL tool will ever be able to wake a machine. In the LMS section above I mentioned a power outage, even one where power is later restored. Likewise, there is the scenario of a user shutting down a machine in a less-than-graceful way: pressing and holding the OFF button! In each of these scenarios, the machine is left in a state where the NIC is totally dead, with no power applied at all. The NIC is therefore no longer able to monitor its network jack to "see" and process a magic packet aimed at it, and a system in this state cannot be woken. You can easily confirm this state by physically looking at the NIC's Ethernet jack and not seeing the telltale flickering green and yellow lights. In these situations, there is nothing to be done, unfortunately.

This article, together with my earlier post 1E WakeUp Server and its AgentFinder Process, provides the definitive overview of the underlying process behind the WOL portion of our 1E NightWatchman Enterprise power management offering. It gives the enterprise a powerful systems management capability: deploying software out of hours, including any required reboot, with no disruption to the end user. Ad hoc, one-off wakeups may also be initiated by a help desk technician for remote access to a user's machine if needed. In my next article in this series I will address the process whereby a user outside the organization (at home, for example) is able to wake his or her machine, even after the evening's scheduled policy shutdown has occurred. This provides a simple means of remote RDP access to the work computer from anywhere, and is implemented via the NightWatchman component known as Web WakeUp.

Ed Aldrich | Solutions Engineer

You can follow 1E and wider-industry news and events via Facebook, Google+, LinkedIn and Twitter, or by signing up to our monthly content newsletter, V1Ewpoint.

If you found this article helpful, please take a moment to share it with your contacts using the social media buttons to the left.


Anti-Virus considerations / recommendations with Nomad


Anti-Virus Background:

Anti-Virus solutions are common in all environments these days; it is hard to find a corporate machine that isn't running some form of antivirus/protection software. Some Anti-Virus solutions include a client-side firewall, which allows you to create and deploy firewall policies to your desktop/laptop fleet. These policies include rules that define what traffic may pass. Traffic coming in or going out is evaluated against the firewall policy, which determines whether it is allowed to continue or is blocked. The policies can be defined by executables, ports, IPs, etc. Any traffic not defined in the policy will be blocked, or conversely any traffic the policy defines for blockage will be blocked. With that said, what ports, executables, IPs, etc. need to be allowed via a client-side firewall rule when deploying Nomad in an environment?

Nomad in the Environment:

Once Nomad is installed on a PC that is running the ConfigMgr client, it becomes the Alternate Content Provider. The Alternate Content Provider is the framework within ConfigMgr that allows 3rd-party solutions to play the role of content transfer agent. Nomad Branch has to be enabled on the package, application (Client Settings – ConfigMgr 2012), software update (Client Agent, Software Updates – ConfigMgr 2007, or Client Settings – ConfigMgr 2012) or task sequence. The fact that Nomad is installed on a ConfigMgr client does not automatically invoke Nomad as the Alternate Content Provider. When a package, application, software update or task sequence is deployed to a Nomad-enabled client, the NomadBranch.exe service broadcasts an election request over UDP port 1779 for the content associated with that package, application or task sequence. The other Nomad-enabled clients on the same subnet listen and respond on UDP port 1779. Once the Nomad clients elect the master, the Nomad peers connect to the Nomad share of the elected master via the SMB protocol. If the content is not on the local subnet and ActiveEfficiency is not used, the client then uses the protocol defined on the ConfigMgr Distribution Point (DP) to obtain content: HTTP or HTTPS for a ConfigMgr 2012 DP or a BITS-HTTP enabled ConfigMgr 2007 DP, and SMB for a standard ConfigMgr 2007 DP.

If ActiveEfficiency is in use, then after the election is held and no client on the local subnet has the content, the ActiveEfficiency database is queried. The call to ActiveEfficiency is made on TCP port 80. ActiveEfficiency holds information about a site, where a site is defined as a group of subnets. Once ActiveEfficiency is queried, it can be determined whether there is another Nomad client within the defined site that has the content, regardless of subnet. If the content is on another Nomad client, that client is elected as the master and the content is downloaded via the SMB protocol. If the content is not at the site, the client connects to the ConfigMgr Distribution Point (DP) and downloads the content using the protocol defined on the DP.
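Putting the last two paragraphs together, the content-source selection boils down to a simple decision flow. The Python outline below is illustrative only; the helper functions are hypothetical placeholders, and this is not the actual Nomad election protocol or a ConfigMgr API.

```python
# Illustrative outline of how a Nomad-enabled client picks a content source.
# Every function named here is a hypothetical placeholder, not a real Nomad/ConfigMgr API.

def choose_content_source(content_id, use_active_efficiency,
                          elect_subnet_master, query_active_efficiency, dp_protocol):
    # 1. Election on the local subnet (UDP 1779 broadcast): is there a peer
    #    that already holds this content?
    master = elect_subnet_master(content_id)
    if master is not None:
        return ("peer", master, "SMB")          # pull from the master's Nomad share

    # 2. Optionally ask ActiveEfficiency (TCP 80) for a peer elsewhere in the
    #    same site, a site being a defined group of subnets.
    if use_active_efficiency:
        site_peer = query_active_efficiency(content_id)
        if site_peer is not None:
            return ("peer", site_peer, "SMB")

    # 3. Fall back to the ConfigMgr Distribution Point, using whatever protocol
    #    the DP is configured for (HTTP/HTTPS, or SMB on a 2007 standard DP).
    return ("dp", None, dp_protocol)
```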

Nomad Ports:

Table 1.

Application/Process Port Action
NomadBranch.exe 1779-UDP Broadcast
NomadBranch.exe 139-TCP Content Transfer by SMB
NomadBranch.exe 445-TCP Content Transfer by SMB over TCP
NomadBranch.exe 80-TCP (HTTP) Master downloads from DP
NomadBranch.exe 443-TCP (HTTPS) Master downloads from DP
NomadBranch.exe 1779-UDP LSZ File Request/Response

 

Anti-Virus Firewall Exceptions:

A firewall rule allowing traffic from NomadBranch.exe on UDP port 1779 will need to be created on the client-side firewall running on the PCs. The Nomad Branch installer opens UDP port 1779 in the Windows Firewall. The recommendation is to allow NomadBranch.exe and File and Print Sharing rather than defining specific ports. If your network team requires additional security, the ports are listed in Table 2.

Symantec Endpoint Protection example firewall rule:

Table 2.

Actions Executable UDP Port TCP Ports
Allow NomadBranch.exe 1779, 137, 138 139, 445, 80, 443

 

Anti-Virus File Scanning Exceptions:

1E best practice is to exclude the Nomad cache folder (C:\ProgramData\1E\NomadBranch on Windows 7 and later, or C:\Documents and Settings\All Users\Application Data\1E\NomadBranch on Windows XP) from on-access scanning. Microsoft also recommends excluding the ConfigMgr client cache folder, ccmcache, from the on-access scan. 1E recommends excluding the ccmcache folder because Nomad hard-links the contents of the Nomad cache folder into it. On-access scanning of the Nomad cache slows down a Nomad peer's ability to download content in a timely manner; it is, however, fine to scan this location during a scheduled scan. It is also worth bearing in mind, when deploying packages, applications, software updates or task sequences, whether the clients will be performing a scheduled anti-virus scan at the same time: the time it takes a Nomad client to download content from a peer during a scan can be significantly higher than normal.
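As a concrete illustration for environments that happen to use Windows Defender (other anti-virus products have their own policy consoles), the sketch below adds the two cache folders as real-time scan exclusions using the Add-MpPreference cmdlet. Treat it as an example only: it assumes the default ccmcache location under C:\Windows and must be run from an elevated session.

```python
import subprocess

# Cache folders to exclude from on-access (real-time) scanning.
# NomadBranch path from the article; the ccmcache path assumes the ConfigMgr
# client's default cache location.
EXCLUDED_PATHS = [
    r"C:\ProgramData\1E\NomadBranch",
    r"C:\Windows\ccmcache",
]

for path in EXCLUDED_PATHS:
    # Add-MpPreference -ExclusionPath is the Windows Defender cmdlet for
    # real-time scan exclusions; run this from an elevated session.
    subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command",
         f"Add-MpPreference -ExclusionPath '{path}'"],
        check=True,
    )
```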

Content/File Integrity:

One question that is asked most often is, "What if one of the files in the Nomad cache location is compromised in a malicious way?" Nomad limits this potential for malicious tampering through the use of .LSZ files. An .LSZ file is created on the ConfigMgr Distribution Point (DP) the first time content for a package, application or software update is requested, and it contains detailed information about that content. The first thing a Nomad master does is request and download the .LSZ file for the content it needs; the request is made over HTTP port 80 or HTTPS port 443, depending on what the SpecialNetShare value on the DP is set to. When an .LSZ request is made and the request contains a hash, the Nomad client computes a corresponding hash of the content on its disk. If the computed hash does not match the hash held by the Nomad client, the content is considered invalid (a hash mismatch); if it matches, the content is considered valid, and a normal valid .LSZ is generated. Nomad clients download the .LSZ from the DP and check that the content and hash are valid. If errors exist during content validation and hashing, the Nomad client terminates the download immediately; if there is no such error, the download proceeds normally. In this way, any content that is invalid (i.e. compromised) is prevented from being downloaded by the Nomad client.
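The validation step is, at heart, a hash comparison. The minimal Python sketch below illustrates the idea in generic terms; the hash algorithm and the way the expected hashes are supplied are assumptions for illustration, not the actual .LSZ format or Nomad implementation.

```python
import hashlib
from pathlib import Path

def content_is_valid(content_dir, expected_hashes):
    """Compare locally cached files against the hashes published for the content.

    expected_hashes maps relative file names to hex digests, standing in for the
    per-content hash information carried by an .LSZ file (format assumed here).
    """
    root = Path(content_dir)
    for rel_name, expected in expected_hashes.items():
        file_path = root / rel_name
        if not file_path.is_file():
            return False                          # missing file: treat as invalid
        digest = hashlib.sha256(file_path.read_bytes()).hexdigest()
        if digest != expected:
            return False                          # hash mismatch: content compromised
    return True

# A client would abort the download (and decline to share the content on)
# whenever a check like this fails.
```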

Other Network Considerations:

PXE Everywhere is often deployed as part of Nomad Branch. PXE Everywhere replaces the need for PXE-enabled Distribution Points and Windows Deployment Services (WDS), and it also removes the need for IP Helpers in the environment. PXE Everywhere communicates on UDP port 69 (TFTP) and UDP port 4011 (BOOTP). Ports 69, 4011, and UDP 2012 will need to be allowed through the firewall. The PXE Everywhere client also communicates over HTTP port 80 back to the PXE Central server, which is usually installed on the ConfigMgr primary site.

PXE Everywhere Ports:

Some vendor-specific security features, such as rogue DHCP protection, can interfere with PXE Everywhere's communication. Table 3 lists the PXE Everywhere ports and the communication that occurs on them.

 

Table 3.

Application/Process Port Action
PXE Everywhere 67-UDP DHCP Request BOOTP Broadcast
PXE Everywhere 68-UDP DHCP Reply BOOTP Broadcast
PXE Everywhere 2012-UDP Master Election
PXE Everywhere 69-UDP Boot Image Download TFTP
PXE Everywhere 80-TCP Query PXE Lite Central for OSD Ad

References:

http://social.technet.microsoft.com/Forums/en-US/753bddc0-0147-4b9a-901c-94e55d024850/sccm-2012-antivirus-exclusions-for-servers-and-workstations?forum=configmanagergeneral

http://www.systemcenterblog.nl/2012/05/09/anti-virus-scan-exclusions-for-configuration-manager-2012/

Robert Cummings | Senior Consultant

You can follow 1E and wider-industry news and events via Facebook, Google+, LinkedIn and Twitter, or by signing up to our monthly content newsletter, V1Ewpoint.

If you found this article helpful, please take a moment to share it with your contacts using the social media buttons to the left.

1E Web WakeUp – Users CAN Wake Computers from Anywhere!


Background

Recent articles in this series provided you with an overview of the entire process of how the wake-on-LAN (WOL) portion of our PC power management solution, NightWatchman, works. This article assumes you have read those background articles. In the first article, titled 1E WakeUp Server and its AgentFinder Process, we described the process of creating the fundamental components needed throughout the enterprise to allow waking machines when necessary, with no changes needed on the routers. The next article, titled 1E WakeUp – How It Works, went on to show how the actual wakeup process works in the systems management scenario, using System Center Configuration Manager (SCCM) mandatory deployment schedules as the events used to wake target computers. We also mentioned that the wakeup process can be used on a one-off, ad hoc basis by administrators who have access to the SCCM or NightWatchman consoles, such as help desk or desktop engineering techs who need to wake systems.

This last one-off wakeup scenario is at the root of what we also implement for the enterprise via the Web WakeUp portion of the NightWatchman solution. This is the piece that answers the very common concern voiced by end users when they hear of a new power management solution coming which will turn their computer off after normal working hours. “What??? You can’t do that to my system! I need to access my computer from home all the time to do MyVeryImportantWork!!” Web WakeUp is the answer to that concern.

What will Web WakeUp do for me?

Web WakeUp enables the wakeup of specific computers via a web site. As described earlier, it is primarily aimed at the end user who needs to access their work computer outside office hours from a remote location, such as from home. The concept works equally well regardless of where they may be. All that is required is the ability to access the corporate network (and thereby ultimately hit the Web WakeUp web page), usually via a VPN or DirectAccess technique. Additionally, Web WakeUp integrates with NightWatchman to provide computer search and status capabilities, and makes use of the previously documented 1E WakeUp components and processes.

Web WakeUp enables computers to be woken up outside office hours and from off premise. This allows computers to be turned off when not in use, using NightWatchman for example, thereby saving power. At some later time they can be woken up by a known user whenever needed from wherever they are. Web WakeUp provides a simple interface to ensure that even non-technical users can get their work computers up and running when needed.

Web WakeUp Features

Web WakeUp is a web application that integrates with NightWatchman Management Center to use 1E WakeUp to enable users to wake specific computers via a web page. The core features are:

  • Increased scalability and performance – Web WakeUp is able to utilize multiple WakeUp servers to allow scalable wakeups in enterprise networks.
  • Multiple registered computers – individual users can register up to 20 computers that can be awoken using a single click from the Web WakeUp website
  • Web site control – administrators can configure the web wakeup pages that are presented to end users
  • Corporate branding – the website can be easily changed so that the appearance suits your corporate needs. You can add links to wake specific computers to your own sites, such as the company intranet
  • Web WakeUp for iPhone and iPad – Web WakeUp is available as an iPhone and iPad application that can be downloaded from the Apple store
  • Support for mobile devices – Web WakeUp lets you wake PCs from your Android, Windows Phone, Blackberry, or iPhone mobile device
  • Remote desktop link – Web WakeUp provides the convenience of a remote desktop RDP link which can be used to connect to your PC after a successful wake up
  • Locked-down security – Web WakeUp lets the administrator register specific users who will be able to use the system to wake computers. Without the appropriate authorization users will not be able to search for or wake systems. By default, Web WakeUp allows anyone access to the wakeup capability.
  • Enhanced computer search – Users can search for computers using domain\username combinations thereby increasing the compatibility between Web WakeUp and enterprise networks.
  • High accuracy – Web WakeUp is able to resolve local computer names without relying on DNS. This is accomplished via an ActiveX control added to the client browser on first access.

 

How It Works

 

The following short video illustrates the very simple, 3-step process a user follows to access their computer remotely.

 

 

Recapping the end-to-end process that just took place in the video:

  1. While in the office, the user accesses the Web WakeUp web page and "registers" their computer. They don't even need to know what that often obscure computer name is, as the web page shows it to them (using the ActiveX control described earlier). Once they complete the registration, a relationship is established between the user ID and the computer name (or names, if multiple systems are registered).
  2. When the need to access the work computer arises from a remote location, they simply authenticate to the domain in the usual way and access the Web WakeUp URL. The web site, seeing the authenticated user ID, returns the previously registered computer(s) associated with that user. The current status of the machine is also determined by Web WakeUp and presented to the user (generally turned off). The user then simply clicks the [Wake up] button, whereupon Web WakeUp communicates with the NightWatchman console and initiates the wakeup process for this one device (see earlier posts for a refresher on these basics if necessary). This is the one-off, ad hoc action described earlier, initiated via the web infrastructure instead of by a help desk or desktop engineering tech. Once the action is initiated, Web WakeUp monitors the startup process via ping activity (a simple version of this ping loop is sketched after the list). When the device responds to a ping, it is reported to the user as awake.
  3. Now the device is up and running, and this is reflected back to the requesting user via the Web WakeUp interface. The user may then proceed to establish a connection to the desktop, typically via the RDP link also provided by Web WakeUp.
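To make the ping monitoring mentioned in step 2 concrete, here is a minimal illustrative Python loop that polls a machine until it answers. It simply shells out to the operating system's ping command (Windows syntax assumed) with a placeholder host name; it is not 1E code.

```python
import subprocess
import time

def wait_until_awake(hostname, timeout_s=300, interval_s=5):
    """Poll a machine with ping until it responds or the timeout expires.

    Uses Windows ping syntax (-n count, -w timeout in ms); adjust for other platforms.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = subprocess.run(
            ["ping", "-n", "1", "-w", "1000", hostname],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        )
        if result.returncode == 0:      # got a reply: the machine is awake
            return True
        time.sleep(interval_s)          # still booting, try again shortly
    return False

# e.g. wait_until_awake("MY-WORK-PC") once the wake request has been issued
```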

     

But wait! There’s more!

 

Throughout this article and the video, we focused only on the [Register] and [My Computers] options in Web WakeUp. Below we see the two remaining options not discussed.

 

[Wake Up Computer] provides a quick and simple means to wake a machine that the user already knows the name of:

 


[Search] is simply that: the user can search for a machine by computer name or user name, even when the entire name is not known, and once it is found, initiate a wakeup:

In Summary

It should be clear that, taken together with my two previous articles and augmented by this last one, the overall WOL functionality provided by our NightWatchman solution is a highly flexible means to wake any system or systems, in a wide variety of ways and at a wide variety of times, from any location desired. I sincerely hope you've enjoyed this article, and will give serious consideration to 1E for all your systems management needs!

Ed Aldrich | Solutions Engineer

You can follow 1E and wider-industry news and events via Facebook, Google+, LinkedIn and Twitter, or by signing up to our monthly content newsletter, V1Ewpoint.

If you found this article helpful, please take a moment to share it with your contacts using the social media buttons to the left.

Reducing waste the old fashioned way


Over my Christmas holiday vacation I visited a coal mining museum with my family, and much of what we learned in that mine, nearly 800 feet under the ground, is applicable to everyday life and to how we constantly strive to improve.

The examples that I cite are the machines that were designed in Germany and in the USA during the 20th century to automate not only the drilling of the coal from the bed but also the extraction of the mined coal in readiness for selling on. Back in the 19th century this whole process was the work of entire families, from the parents who would be deep in the mine in small 24-inch-high tunnels, on their hands and knees, scrabbling with small wooden tools to hack at the coal seam, then passing it out to their children (as young as five years old) who would move the coal out in one-tonne carts to the mine head for payment. Obviously, these automation machines would significantly reduce the need for the whole family to be down in the mine, but they could also extract significantly more product per day, making everybody richer along the way.

Now, if we take a look at modern IT systems, a common issue that many businesses face is the cost associated with running the business, or 'keeping the lights on'. With many huge datacenters of equipment churning away hour after hour, day after day, it is clear that there is a lot of electricity being used just to keep those lights flickering, the disks spinning, and the networks alive. If we take a look out onto the office or the shop floor, again we see many PCs sitting powered up, waiting for their moment of glory when their associated user(s) arrives at work to begin the day. The same of course can be said for lighting and air-conditioning, all waiting patiently for those all-important users to show up ready for work. All of these things are consuming valuable resources, and as a direct result are costing both the company in question and the greater economy.

When I took on my very first job out of college, the building the business was located in was a 1960s-style building with reasonably recent renovations, but the thing I recall most of all (after the hot beef sandwiches in the canteen on a Friday, of course) was the fact that every single light switch had a little plaque next to it that simply stated "Does this NEED to be on?" A simple, yet important, notice to empower the user to help save the business park a little money on electricity, but with a louder implication about the environment (the plaque was also adorned with images of trees and flowers around the bottom). The point here is that I cared less about the business park's electricity bill than about the wider environmental impact of keeping unnecessary lights on.

We all have a responsibility to be more aware of ourselves and the environment so why not start today and begin thinking about whether your lights, computers and other electrical appliances actually need to be always drawing power from the grid?

Beyond this, why not take a leaf out of the book of those 20th-century miners, who realized that they could automate many of the manual operations and therefore make huge operational cost savings, while at the same time removing the risk to the younger family members by removing the need for them in the mines?

At 1E, we have been helping businesses automate their cost reductions by managing the uptime for their desktop estate, in the process reducing their carbon footprints and costs to keep their IT lights on. So, if you want to learn more about how you too can reduce your carbon footprint and costs related to electricity usage, drop us a line and we would be only too happy to help. We promise not to tell you more about 19th and 20th century coal mining.

Simon Rust | Director, Product Management

You can follow 1E and wider-industry news and events via Facebook, Google+, LinkedIn, Twitter, and via V1Ewpoint, our monthly newsletter. To discuss any issues relating to this article with our experts, email info@1e.com, or visit our LinkedIn forum, 1E INSIDEV1EW.

If you found this article helpful, please take a moment to share it with your contacts using the social media buttons to the left. Thank you.

Reducing your carbon footprint, the easy way


Following on from my last article, which talked about the coal mining industry and how automation reduced risk and increased output in the mines during the 20th century, I thought it prudent to go on to discuss how we can use these lessons to reduce our own carbon footprints in an easy manner, reducing our costs as we go.

At 1E we have been helping customers significantly reduce their carbon footprint, while saving millions of dollars along the way, for the last 18 years. During this time, we have devised a simple methodology that we use together with our customers, known as AOR (Analyze, Optimize and Realize). There is more information available here on AOR, but in essence we took our domain expertise, amassed over those 18 years, and built it into the world's number 1 PC energy management solution, NightWatchman Enterprise. The software is placed into a reporting mode where it takes no corrective action across the PC estate, but during this time records usage and uptime of the equipment. We therefore Analyze the current environment via quick and easy reporting across the estate, to demonstrate how much energy is being used and thus how much is being wasted. At this point, should you wish, you can engage with our Financial Analyst team, who can delve deeper into the statistics to determine overspend beyond power usage alone, helping you really get a hold of IT spending.

Once we have an understanding of what wastage is out there, we can Optimize our environment by implementing a PC power policy with the NightWatchman Enterprise solution, enabling the user population to continue to make use of their PCs as and when they need to, while the business controls when a PC is able to go to sleep, and thus begins the carbon footprint reduction.

The final stage to the methodology is to Realize the savings by implementing the PC policy across the desktop estate, thus initiating the software to automatically manage the energy usage per desktop.

The 1E team then continue to work with the customer to ensure an ongoing optimization process takes place as part of the AOR methodology, ensuring that as time passes not only do those initial savings get realized but that further additional savings will be assessed and managed, adding to the total as each year passes by.

The more PCs that are in the enterprise desktop estate, the greater the potential saving of course. So for example Arup, a leading structural engineering firm, makes savings equivalent to the annual greenhouse emissions of 81 modern passenger vehicles or the energy use of 40 homes in one year, both of which represent a saving of approximately 442 Metric Tonnes of CO2 per year. Within the business Arup already encouraged the staff to shut their PC down when not in use, but despite implementing this policy the organization found that there were still more than 25% of PCs left on overnight during the week and 20% during each weekend. Today, NightWatchman Enterprise ensures that the desktops are shut-down overnight when not required (but are awoken to patch during the night before being put back to sleep) and awoken just before the user has a need to use the device in the morning, saving boot time.

Similarly Aviva, the world’s 6th largest insurance group (with almost 54 million customers worldwide), are equally committed to corporate responsibility, resulting in a requirement to reduce their carbon footprint. Aviva found that despite best efforts, over 60% of PCs were left powered on overnight during the week and 57% at the weekends, so an implementation of NightWatchman Enterprise enabled the company to save more than 5 million kWh (Kilowatt hours), in turn representing 2812 metric tonnes of CO2 per annum, helping the business exceed their corporate responsibility goals on an annual basis.

So, as can be seen, reducing your carbon footprint is as simple as AOR. 1E can help you understand the energy usage across your enterprise estate and help you plan for a better tomorrow with savings of typically $26 per PC per annum. Come and read more about NightWatchman here or simply contact us at info@1e.com for more information. We would be delighted to talk with you and see what we can do to help.

Simon Rust | Director, Product Management

You can follow 1E and wider-industry news and events via Facebook, Google+, LinkedIn, Twitter, and via V1Ewpoint, our monthly newsletter. To discuss any issues relating to this article with our experts, email info@1e.com, or visit our LinkedIn forum, 1E INSIDEV1EW.

If you found this article helpful, please take a moment to share it with your contacts using the social media buttons to the left. Thank you.

NightWatchman v7.0 upgrade – 10 Questions That Customers Ask


Version 7.0, the latest release of NightWatchman, 1E's PC Power and Patch Management solution, contains many new features, enhancements, and bug fixes that improve the performance of the product and its components. Within months of its release to the market, we have already had a significant number of customers upgrade to the latest version. Such customers often reach out to us with a batch of upgrade-related questions. In my role as a support engineer on the 1E Support team, I have been exposed to quite a lot of them. From my own experience, and with the help of some deep-dive trend analysis, I have hand-picked 10 of the most popular questions we are asked by NightWatchman v7.0 customers. Here's the list:

 

1. Where do I get the latest version of NightWatchman?

NightWatchman v7.0 is available in the “software downloads” folder on the 1E Support Portal. If you have access to the portal, you can download it here,

https://supportportal.1e.com/kb/index.php?View=files

All future hotfixes/updates for the version will be available in the same folder as and when they are released.

 

2. Do I need a new license key?

Yes – Indeed! NightWatchman v7.0 is a major release. All major releases for all 1E products require new license keys.

License key requests are now handled directly by our dedicated Sales Operations team. Send an email to salesops@1e.com and get your new keys generated.

 

3. What if I have used the trial key to install it? How do I relicense it?

To run the 1E Agent beyond the 30-day license period, the product needs to be licensed.

If WakeUp Server is initially installed using the 30-day evaluation license, it can be relicensed later, once you have the full license, by running the following command line (from an elevated command prompt):

wakeupsvr.exe -relicense=ABCD-1234-5678-8765-4321

Similarly, for the 1E agent, the command is:

nwmsvc.exe -relicense=ABCD-1234-5678-8765-4321

In each case, ABCD-1234-5678-8765-4321 represents your actual license key.

*It’s important to note that the license key should be entered manually. Please do not copy-paste.

If you are not running with a time-limited license and you see the license expiry notification on NightWatchman component service startup, contact 1E customer support with the license key used.

 

4. What are the supported NightWatchman versions for upgrade?

The 1E supported software upgrade path for NightWatchman is from v6.5 to v7.0, as these are the ‘in support’ versions. We do not anticipate any issues when upgrading from v6.1 but this would need to be “best endeavors support” as this version has been out of support since April 2014.

 

5. Do I need a separate server to host ActiveEfficiency? Do I need the “Scout” if I don’t use AppClarity or Shopping?

NightWatchman 7.0 uses ActiveEfficiency to synchronise cloud-based data from 1E, such as power consumption figures for different hardware models. NightWatchman can be installed without ActiveEfficiency Server; however, data synchronization with 1E is not possible without it.

If ActiveEfficiency is only being used for NightWatchman Enterprise you may install it on the same server. However, if you have (or intend to have) other 1E solutions implemented, you should host it on a separate server adequately configured to support the functionality required.

NightWatchman does not use any of the data that gets collected by the Scout, so the Scout is not required if ActiveEfficiency server is only being used with NightWatchman.

 

6. We’re planning on installing NightWatchman 7.0 on a new server. How do I back up and restore the database?

This is perhaps the most frequently asked question, irrespective of the version of the product. The NightWatchman Management Center database, AgilityFrameworkReporting (named as such for historical reasons), holds all the information returned by the NightWatchman clients and the WakeUp agents and forms the basis for the reports. We already have a detailed public-facing knowledge base article listing the recommended steps. Here's the link:

https://supportportal.1e.com/kb/index.php?View=entry&EntryID=13604

If you need more information on how to backup and restore databases on SQL server, you may visit the following link,

https://technet.microsoft.com/en-us/library/ms187048(v=sql.110).aspx

 

7. Do you provide legacy support for previous agents if we upgrade NightWatchman Management Center and WakeUp to V7? Will V6.5 and V6.1 agents still work?

Although we strongly recommend a complete upgrade (both server and client side) to NightWatchman v7.0, if your timeline is constrained you can upgrade just the server components. The clients must be configured with the name of the server hosting the NightWatchman Web Service (AFWebService) component, which they post data to and retrieve policy from.

v6.5 agents will work. v6.1 agents might work, but they have not been tested since that version is out of support.

That said, you should look to upgrade your clients soon, as older clients may not be able to take advantage of newer features.

 

8. For the server, how do I uninstall our current installation of NightWatchman?  Also, is there any potential for damaging our SCCM installation/functionality by removing 1E? 

Your current installation can be uninstalled from the control panel on the server. That said, if you want to preserve the historical data, a backup of the AFR database is definitely recommended. The console that you see for NightWatchman is just the front end of the application. Uninstalling/reinstalling it doesn't cause any problem as long as it is pointing to the correct SQL server where the database is hosted. You can always install a new version and point it to the database. If you are not worried about preserving the historical data, you may well uninstall and start from scratch.

Removing NightWatchman/Wakeup will not damage your SMS/SCCM installation.

 

9. What precaution do I need to take when I uninstall the server components? Will our users get any kind of warning messages or pop-ups from the 1E agents?

NightWatchman Console, WakeUp Server, and the 1E Agent can all be removed from the control panel. Uninstalling 1E Web WakeUp may leave behind the "1E Web WakeUp, Import Authorization" SQL batch job; this can be removed manually following the uninstall.

Uninstalling the 1E Agent will not show any warning messages to the end user.

 

10. I don’t see a new version of Enterprise View. Do I use the old one?

Yes! There isn’t a new version of Enterprise View released alongside NightWatchman 7.0. You can continue to use the version released with v6.5 while upgrading the rest of the components; the older version of Enterprise View has been tested to work seamlessly with the latest release of NightWatchman.

 

We hope you found this informative and hope to do more of these types of posts in the future for our other products.

Ashutosh Tripathi | Product Support Engineer

You can follow 1E and wider-industry news and events via Facebook, Google+, LinkedIn, myITforum, and Twitter, or by signing up to our monthly content newsletter, V1Ewpoint.

If you found this article helpful, please take a moment to share it with your contacts using the social media buttons to the left.

App-V – to Stream from a DP or not to Stream, is that a question?


I’ve had a few engagements in recent months where conversations about how App-V is delivered to endpoints via Configuration Manager were misunderstood a bit, and at first, I was among the misguided. These talks found me digging into how CM manages App-V content and, subsequently, how data is made available to the App-V client via a Distribution Point instead of an App-V Streaming Server.

I will address this topic from a base CM perspective and mostly leave 1E technologies out of it. Also, I am placing a good bit of humor in here so the read is hopefully informative and slightly entertaining but, make no mistake, this content has been researched, tested personally, and verified in active customer environments. Most importantly though, I submit this information with the request that anyone who reads it will give their thoughts as good members of the systems management community. I also composed this because, if you are getting ready for App-V in your environment, you should focus on the stability this technology gives applications and not so much on streaming (echoing a great blog article from Tim Mangan where he basically says what I am pointing out). Please challenge the content if you find a gap, or support the time spent by telling me if it helps.

Before you jump in, here is a list of what you will get from this read:

  • How App-V is handled by CM management components
  • Options for deploying App-V apps
  • How the CM client manages streaming content vs locally cached content
  • How the App-V client interacts with streaming vs locally cached content
  • Main example of when streaming is the right idea and why
  • Reasons why streaming in a non-virtual environment introduces more cons than pros

OK, let's go… After software has been packaged up in App-V and added into CM, the application files are loaded into the single-instance file store and then placed on targeted Distribution Points. Think of it just like staging any other Application in CM. So instead of thick installation files, what gets stored is a packet of data the App-V client will leverage to present a virtualized instance of the software.

Now on to the how it is doled out, that is where this gets interesting, well as interesting as App-V gets anyway (to me very interesting and, yes, I need to get out more).  This is the place where I initially got foggy on how things work, what App-V is really doing, and what each option really gets you.  I will explain using two scenarios which are your options in CM for deploying, verbatim:

  1. Stream virtual application from distribution point – (this is described in line with a "Required" setting for deploying; if a deployment were set to "Available", the first two events below would happen right after a user triggers an install action in Software Center) –
    1. CM Deployment download action (event tied to the "Available Time" in a "Required" deployment) – CM provides a client with the virtual application's manifest file, icons, and framework .osd files via BITS (this content is almost always tiny unless you are doing some crazy packaging), placing them into the CM cache.
    2. CM Deployment installation action (event tied to the "Deadline Time" in a "Required" deployment) – The abovementioned files are passed to the local App-V client, which then places the main program icon somewhere so an end user may launch the app like any other. The CM client is now done with this transaction and BITS is out of the picture (I would keep a mental note of that BITS-being-out-of-the-picture thing too, BTW).
    3. End-user launches App-V application via placed icon – The App-V client knows where to go to stream that app: from its friendly neighborhood DP through the App-V client. After a small delay (this small delay will happen the first time the app is launched no matter whether it is streamed or locally cached, but the streaming delay is usually a bit longer) the application opens for the end-user and the content stream is initiated. The App-V client will stream prioritized content to align with the modules of said application which are in use by the user. No matter what portions of the app are in use, App-V streams down the entire content for use after this first execution and voila, you have a fully installed virtualized app!
  2. Download content from distribution point and run locally – (same thing, explained with “Required” Deployment in mind) –
    1. CM Deployment download action (event tied to the “Available Time”) – CM provides a client with the virtual application’s full content via BITS into CM cache.
    2. CM Deployment installation action (event tied to the “Deadline Time”) – Abovementioned files are passed to the local App-V client which then places the main program icon somewhere so an end user may launch the app like any other.
    3. End-user launches App-V application via placed icon – The App-V client streams (WHAT? STREAMS? Yes, it is still streaming) the application locally from the CM cache into the local App-V client, and after the same small delay, the application is open and running while being streamed in at the same time. Now you, my friend, have an installed App-V application; go tell all your friends.

**This detail was composed so a complex scenario is more easily consumable; for full detail please refer to Microsoft's whitepaper on App-V & CM.

So now you may be coming to the same realization I did when I learned how this worked end to end.  What am I really getting when streaming from a DP?

Well, a lot, if you are truly going virtual, and by that I mean you are streaming applications within a properly implemented virtual PC environment with non-dedicated OS instances (meaning when a user opens a virtual PC, it is a fresh OS and personal data is network hosted, not a virtual PC assigned to them personally – I managed this type of environment long enough to know you don't wanna do this if you can avoid it and the business can be aligned). Additionally, this virtual PC would have been provisioned with limited disk space and resources, and you would truly want portable applications as this is purpose built/scaled. Guess what else you would have in this environment? That's right, a DP sitting right next to it in the same datacenter and, subsequently, with extremely high connectivity. So in this scenario, streaming is super cool and very applicable to the use case.

Now, let’s talk about what I have seen out there in the wild when I hear about shops which have implemented App-V streaming (and also happen to talk to their users) in an office environment with a DP on that local LAN.  Spoiler alert, it’s a bit quirky/risky and virtual app performance has some hiccups, let’s go over why:

  • Top point is, essentially, these environments are setting the stage for a data transmission storm on their local LAN as the App-V client goes after streaming content during production hours since this data acquisition is only triggered when the user first launches their app.  Now, this is a highly connected LAN network so that should be fine right?  Well don’t assume that all other bandwidth needs rocking around that local area will perform as your SLAs would hope.  In short, don’t risk hampering other daytime data needs when you can easily cache the App-V content locally to systems during maintenance windows in a controlled fashion with a result of extremely high availability App-V streaming (meaning the App-V client streams from the local hard drive, because it doesn’t get faster than that my friends).
  • App-V clients do not throttle/manage bandwidth consumption, and even if this is on a local LAN, I would not recommend giving any process carte blanche when you have desktop-class PCs which should have ample disk space to handle the cached content in CM. Not to mention the fact that once this content is pushed into the App-V client, the CM client may do its thing and clean up in edge cases. While I'm at it, I will share that I had my hopes dashed: the App-V client does have registry settings which appear to relate to throttling. However, they do nothing, as they are obsolete leftovers from old versions of SoftGrid. Additionally, I'm not going to go so far as to say the native CM client and BITS throttle bandwidth either; however, you can at least rate-limit the client. How inefficient that approach is, choking your ability to provide CM content in general, is a topic for another day though.
  • Streaming App-V in this type of scenario is pretty much like deploying software using “Run from DP” and that always works 100% reliably right?? (please note sarcasm, some of you may have this working really well, I never had a lot of luck when looking for a most reliable option).  Well in this App-V scenario, you are placing end-user usage of that application in that mix.
  • You probably don’t only have desktops in play and setting up separate deployments for laptops vs desktops is a lot of work for no good reason as far as I have been able to discern.  I say this because you don’t want the instance where a laptop gets the manifest for a streaming app then goes home and connects to VPN…..
    • Best case scenario the App-V application content was not stored on your VPN supporting DP, this means the user is left in a ditch but it’s better than the next scenario.
    • Worst case the content is on the DP supporting VPN and you now have a crazy App-V client losing its mind with your WAN.
  • The App-V client does not throttle/manage bandwidth at all; I just want to point that out again. Here is some math around that daytime application demand (worked through in the short sketch after this list) – 80 PCs launch a virtualized MS Word Viewer mostly all at 9 am; an app size of 100 MB means 8 GB moving around that LAN during the day, taking all the bandwidth it can, and that's just a low-level example.
  • Lastly, if this is being used within an enterprise PC fleet, stay away from unneeded risk of application responsiveness issues because they will exist albeit possibly not the majority of experiences, some users will not like it, and we all know how that perception of IT services works in those scenarios….”We have gotten reports from business leadership that your App-V solution is not working for users”.  Don’t pretend like you have not heard a response like this in the past concerning other things which you know relate to a small handful of issues.
  • Bottom line, production bandwidth needs are best kept as predictable as possible when it comes to IT products and services.  This means if you have the ability to push any consuming processes into a non-production time window, do it.
  • Perceived pro with streaming is that applications are instantly updated when updated on the DP.  This is not actually the case, all you really do here is initiate a streaming of content as soon as the user triggers the application which is another opportunity for latency for both App-V application and bandwidth consumption overall.  Alternative here is to simply plan your application updates so they may be supplied to devices during off peak hours.  This also allows you to update in a phased manner, remember, just because you can update an application on all devices at the same time it does not mean you should.  Probably the same reason you do not have an entire environment update an application at the same time, because not all issues with software deployment are related to installation issues.  Compatibility challenges and end-user questions are a good portion of that and this would happen all at once.
  • In case you didn’t notice, when streaming in CM, Feature Blocks are downloaded entirely upon first launch so the seemingly appealing aspect of App-V only pulling blocks it needs when streaming is not present when leveraging CM apparently.  Not that I think this is good anyway, since the as-needed streaming means you have clients downloading content in even more of an unpredictable manner.
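To put a number on the bandwidth point a few bullets up, here is that arithmetic spelled out as a tiny Python calculation. The PC count and application size are the example figures from the bullet, not measured data.

```python
# The bullet's example figures, spelled out.
pcs = 80               # PCs launching the virtualized MS Word Viewer around 9 am
app_size_mb = 100      # App-V package size in megabytes

total_mb = pcs * app_size_mb     # 8,000 MB
total_gb = total_mb / 1000       # roughly 8 GB crossing the local LAN at first launch

print(f"{total_mb} MB, i.e. about {total_gb:.0f} GB of streaming traffic")
```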

Now, I will end by saying that Nomad fully supports DP transmission of App-V content to CM clients, and that means everything I am talking about above comes with the added benefit of far surpassing the value BITS presents when supplying App-V applications to an entire estate as a single, reliable, and readily available service no matter where clients are sitting (that's kind of our thing, just saying).

Shawn Cardamon | Solutions Engineer

You can follow 1E and wider-industry news and events via Facebook, Google+, LinkedIn, myITforum, and Twitter, or by signing up to our monthly content newsletter, V1Ewpoint.

If you found this article helpful, please take a moment to share it with your contacts using the social media buttons to the left.

How much energy is YOUR PC estate burning?


Do you have any way of accurately determining how much energy your end user computing is costing you on a daily / weekly / monthly / annual basis? Can you even guess what part of the energy bill is attributable to end user computing? Is this even something that you care about?

Of course this largely depends on what position within the business you hold, but most people are sadly not in a position to even begin to guess at the number, let alone know the actual $$ value.

1E’s customers have been able to accurately identify these costs and as a result are even able to predict with accuracy what it will cost to run the end user computing equipment next week / month / year.

NightWatchman is the technology that can tell you exactly how much energy is being consumed across your PC estate, and when the cost of YOUR energy provision is entered into the product, NightWatchman can tell you the exact $$ cost of each PC in your end user computing estate. OK great, so now I know the cost of keeping the lights on, but what value does that actually give me? I can look up my energy bills and roughly figure that out, so where is the value proposition of the product?

The really key part of the NightWatchman technology is that it will not only advise on uptime and its associated cost, BUT will also tell me how much of that time each and every PC was actually doing useful work. In other words, it accurately accounts for how much time the PC was being used by the user population for real work, and by extension how much time the PC was simply sat waiting for a user to make use of it. This is where the first part of the magic lies, because this immediately gives us the wastage in energy.

The second piece of magic comes from the definition of a power policy that will effectively put the machine to sleep during periods when the user does not actually need the PC to be awake (such as overnight, while the user population are themselves sleeping), delivering an immediate hard cost saving by reducing energy use. In combination with 1E WakeUp (a component of the NightWatchman Enterprise solution), each PC can be awoken just in time for the user, so that the user is blissfully unaware the machine has been sleeping while they were away.
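
As a purely illustrative sketch (not NightWatchman’s actual reporting logic), the basic waste calculation described above looks something like the following. Every number here is a made-up assumption: an average power draw, an energy tariff, and an estimate of genuinely used hours.

# Hypothetical figures for illustration only.
power_draw_watts = 80           # average draw of a desktop while powered on
tariff_per_kwh = 0.15           # energy cost in $ per kWh
hours_on_per_year = 24 * 365    # a PC left powered on around the clock
hours_used_per_year = 8 * 230   # ~8 working hours across ~230 working days

kwh_on = power_draw_watts * hours_on_per_year / 1000
kwh_used = power_draw_watts * hours_used_per_year / 1000

annual_cost = kwh_on * tariff_per_kwh
wasted_cost = (kwh_on - kwh_used) * tariff_per_kwh   # cost of idle, powered-on time

print(f"Annual energy cost per PC: ${annual_cost:.2f}")
print(f"Of which idle waste:       ${wasted_cost:.2f}")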

Over the last 18 or so years, we have found that NightWatchman saves 1E customers approximately US$26 per PC per annum by automatically managing uptime. This saving is great for the bottom line, and of course when combined with environmental and tax incentives for energy management, it really helps with further reducing costs and the carbon footprint. If you take a quick look at a previous article, you can see that organizations such as Aviva found that 60% of their PCs were left powered on overnight and hence were able to save just over 5 million kWh per annum!

So, back to the original question; can you determine how much energy you are burning? If you would like to not only know the details but also pinpoint means to reduce the burn, reducing your carbon footprint and energy spend, get in touch. We will be only too pleased to help you begin your journey of exploration and savings.

Simon Rust | Director, Product Management



Welcome back to 1E Free Tools


For almost two years now I have worked for 1E and seen first-hand how much this organization cares for the systems management community. It is one of the major reasons I came here and why I am very proud to call 1E my professional home and extended family (no, seriously).  However, one of the things I remember from back in my CM admin days was the 1E that supplied free tools, and I wondered why they were gone – and it seems I was not alone in this quandary.  After interacting internally here, I understood that this was not just a simple change within 1E, but a change in the CM community itself.

With MMS laid to rest back in 2013 (you are missed, dear friend) and the advent of Configuration Manager 2012, alongside very strong changes in how our sector of the IT industry functioned, things were – and still are – a bit up in the air.  1E has been strongly investing in our community in a not-so-public way for a while now, and I did not fully understand it until I started my job as a Solutions Engineer here back in August of 2013.  The company is working very diligently to discern where enterprise IT is headed, influencing trends, and deeply retooling how our technology provides an amplified toolset to deliver on the ever increasing needs and wants of an elusive “end user”. Even the perception of what the true goals are for IT organizations in today’s world has been top of mind, and these discoveries have cascaded down into tremendous gains in our tech.

So while we (and I cannot tell you how proud I am to use that “we”, BTW) have made a ton of headway, which is evident within our technology stack, it is now time to bolster our public community efforts and give as much support as we possibly can to efforts like the Midwest Management Summit, CM User Group involvement, and last but not least – bringing back our FREE TOOLS!  These pure, community-driven efforts are what made the early years of MMS so special – everyone knows this – so we have made the conscious decision to keep things this way.

If you take a look at our free tools page, we are off to a great start, but these tools need to be shared widely, grown, and refined to better serve systems management.  That is exactly how we are going to approach this, meaning each of our free tools will also have a discussion section within our forum page hosted on myITforum. Within this resource, anyone and everyone is welcome to share what is working well with these tools and, more importantly, what is not, so we can work together to keep improving.  Lastly, tell us your thoughts on this, start a thread, and let’s find out together what else 1E can do for the community, as that is the whole point!

While we are on that point, I’ve picked out just two from the selection of fantastic free tools that I am really excited about –

1E Enforcement (Official Release end of February 2015) – This free tool allows for “required” ConfigMgr 2012 Application Deployments to be “selectively” enforced, maintaining the ability to remove the application if required in the future, without the administrative overhead of altering the targeting method via collection exclusions or membership query-rule.  Take a closer look here, then go discuss it here.

1E Magic Test – Has your environment been configured properly for WakeOnLan?  This tool will help you find out if you are right or not then provide standard testing against any possible issues down the road.  Take a closer look here, then go discuss it here.

Shawn Cardamon | Solutions Engineer


THE ‘1E FREE TOOLS’ SOFTWARE IS PROVIDED “AS IS” AND 1E DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
BY DOWNLOADING THIS SOFTWARE 1E GRANT YOU A LICENCE TO USE THE SOFTWARE, YOU MAY NOT REVERSE ENGINEER, DECOMPILE, DISTRIBUTE, RESELL OR COPY ANY PART OF THE SOFTWARE WITHOUT THE SPECIFIC WRITTEN PERMISSION OF 1E.
1E WILL NOT BE LIABLE FOR PROVIDING SUPPORT, NOR DOES IT WARRANT THAT THE SOFTWARE WILL BE UPDATED, FIXED OR PATCHED. HOWEVER SHOULD YOU HAVE ANY QUERIES OR COMMENTS REGARDING THE SOFTWARE OR THE TERMS OF USE PLEASE EMAIL INFO@1E.COM

Energy costs are dropping, should we care about PC Power Management anymore?


Recently, the cost of oil has dropped significantly (presently under US$50 per barrel – by way of comparison, the same barrel was over US$100 during the summer of 2014) and as a direct result we are seeing the costs associated with the provision of energy to homes and businesses drop. So, given that we are automatically getting cheaper energy provision, do we really need to care about PC power management?

The answer of course is usually “it depends”, but typically businesses do not measure themselves on their direct operating costs alone; they also track other measurements such as their carbon footprint, as it is common for businesses to have social responsibility directives in place to demonstrate leadership in environmental and social arenas. The fact that the cost of energy provision has reduced is of course great news for every business that requires energy, but this alone is not going to help with carbon footprint reduction or other demonstrations of social and environmental leadership.

1E have been helping businesses reduce their energy spend by approximately US$26 per PC per annum for the last 18 years with some simple, yet incredibly powerful PC Power Management software known as NightWatchman.

NightWatchman is silently installed across the end user computing estate and monitors PC usage, which is in turn used in combination with energy cost information to determine accurate operating costs for each and every PC across the estate. Of course, each model of PC has a different energy burn rate (kWh – kilowatt hours); NightWatchman is aware of this and uses data from each PC supplier to ensure that the burn rates are accurate for the actual PCs in use across your estate. This data is updated monthly (and securely synchronized into your business from 1E online systems) to ensure that even the most up-to-date hardware is accounted for accurately. Similarly, the business could well be using energy from different suppliers for different offices, each with its own cost model per kWh; again, NightWatchman can cater for this to ensure that the data used is up-to-date and, most importantly, accurate.

Reports from NightWatchman can not only advise you of exact energy spend but also highlight usage patterns of the PC estate, which in turn can enable a power policy (or multiple policies) to be implemented across the estate that begins the automated power-down of systems in order to reduce energy waste. NightWatchman is usually implemented alongside the 1E WakeUp solution (part of the NightWatchman Enterprise solution), which will subsequently resume the PCs from their slumber just in time so that the user population are utterly unaware that their PC was sleeping while they were out of the office. Similarly, WakeUp can be integrated into the business’s PC management solution to ensure that PCs are awoken out of hours (or whenever appropriate) to perform patch management and other maintenance activity, keeping the security management function happy that the estate is in optimal condition, automagically.

Now, it is of course quite possible that the cost of oil will surge in the coming months, thus increasing the cost of energy provision, removing all the benefits that we are presently enjoying with regard to the lower cost of operations. In this case, wouldn’t it be pretty handy if we have implemented a simple solution to handle the present waste in energy burn, thus get us into a position that we can actually reduce our energy burn (including associated cost AND carbon footprint) and then be in a position to be able to more accurately predict running costs, while being in a better place when it comes to the management and security of our end user computing?

So, if you could not only benefit from the lowered cost of energy provision, but could further reduce your actual energy usage (and hence associated cost), lowering your carbon footprint along the way, why would this be something that was not of interest to you? If you would like to reduce operating costs, carbon footprint AND increase your End User Computing Management capabilities, why not get in touch and see just what a difference you can make, we would love to help you.

Simon Rust | Director, Product Management


5 things to fix before you consider Windows 10


Well, we talked about the five reasons not to migrate to Windows 8, but instead to skip straight to Windows 10, and since you believe that was 100% solid, let us talk now about what you need to do in order to be prepared for that jump to diez.  Or, as we like to call it within the CM community, the process of “please, oh please, do not let Windows 7 happen to me again.”

In all seriousness, Windows 7 was tough for everyone, and I mean everyone, within an enterprise IT environment.  That OS migration effort was the hardest hitting to date and is one of those topics where, no matter who is in the room, you can throw out the comment “good news is, no matter what we talk about, it will be less painful than migrating to Windows 7.”  That tells you something: no one is very proud of the space our industry slipped into – not how the migration was handled, but the level of complacency which made the task so arduous for most – that space of being basically incapable of providing our business partners with a proactive, agile, and lean means of keeping technically current at a foundational level.

Mind you, in cases where 1E did drive said migration, the zero-touch streamlining we provided removed a lot of that complexity and delivered automation with enduring value for the companies who accepted our help.  However, zero-touch OSD automation is only part of that ever changing puzzle which is systems management or, more specifically, Software Lifecycle Automation.  Hopefully this list either supports, challenges, or brings to light items which you may carry forward when taking on your next migration effort.

Here are the five items which I feel are paramount in order to keep the sins of the past, in the past:

  1. Application Rationalization… it’s not just a once-every-five-to-ten-years sort of activity.  This is the biggest challenge when it comes to being agile enough to meet the demands of the digital business – an emerging identity of our charge as IT professionals which we are only starting to fully comprehend.  Invest the time now to develop a proper technology standards process and team up with your IT partners (security, procurement, systems management, support, and so on) to start standardizing the software accepted into your estate.  Help the business understand how this type of proactive process provides a lean environment capable of efficient change when needed, with significantly lower cost and support variables.
  2. Design your OS migration with break-fix in mind.  Take the engineering and operations design investment further than simply addressing the ability to move from Windows 7 to Windows 10.  Design a dynamic service offering which can handle a device reload if necessary down the road.  Think of an environment where end users can reload their own PCs if they need to (this still will not fix a dead hard drive, though).  Moreover, imagine a world where your support staff may cut troubleshooting off at the two-hour mark, knowing a fully automated reload is only one hour away and can be independently triggered by the end user.  If this sounds like something you can do with your own mobile device, it’s because it is exactly what you can do with your own device – and guess what?  Your end users can too, and they already have this expectation of you.
  3. Get your application owner house in order.  Most companies have caught on to this, but for those who have not: do you know who is best suited to determine applicability, value, use case fit, and compliance for business critical applications?  You guessed it, the business itself!  If you do not wield the ability to expose data surrounding software footprint, major versioning, and allotted entitlement, you need to get there.  Now, the option always exists to funnel all this work into a traditional SAM team, but if this is your route, make sure you know who is accountable for what, and that they have the ability to enact compatibility cycles.  This component of making sure you have a reliable and easily triggerable path to perform compatibility testing will save you months of “why can’t you do this for us, IT” or “we don’t have the time or resources to invest in this activity until XX date as we have scoped other work so the business can be profitable” (sound familiar from the Windows 7 migration?).  This is a political time bomb if handled reactively and will impact the perception of migration success, so get the work streams to achieve this task into a business-as-usual run-state.
  4. Map your applications intelligently. There are a whole host of ways to do this (some good, some not so much), but weaving this into your standard application acceptance and packaging process is crucial for portability and allows software to be more of an independent and valuable entity.  The upshot is that an OS migration, upgrade, and/or reload is just that – an OS function which requires ensuring user identity is retained, but should not be more than that, and won’t be if application mapping is well maintained as a standard practice.
  5. Which technology investments will pair best with a Windows 10 world?  This is a question to start asking yourself now and to discuss with your major software vendors.  If a software product is valuable today but its vendor has no line of sight to how it can take advantage of the “mobile-centric” application experience, that is a pretty good indicator of stagnancy – or cause for them to provide a very good reason why not, if the function of the application could benefit, of course.  If no vision forward exists, you may want to look at your options now as opposed to when you need consistency during migration time.  This is also a fantastic way to let the business get involved and aid in the decision-making process.  I am not saying you should ask Adobe how PDFs will amaze you in Windows 10, but I am saying that if you have a mobile workforce and you are not asking these questions now, rest assured they will – and it will be right around the time when they look down at their smartphone and sigh in disappointment.

So let me know your thoughts out there. This is a gigantic topic and there is no shortage of opinions, but this view is provided in the hope that the author may learn from the feedback just as much as someone may learn from the content above.  Windows 10 is a chance for the systems management community to emerge and lead the charge, amplifying how potentially complex and costly exercises like OS migration can usher in an overall uplift in the value of IT products and services – and if we do not approach it as a community, we will miss out on a ton of possibility.

Shawn Cardamon | Solutions Engineer


1E Free Tools Now Includes UpdateBootImage GUI


A few days ago a small utility I created, called UpdateBootImageGUI, was published to the 1E free tools site.

This utility is used internally at 1E by Consultants and Solutions Engineers to help with creating the long and complex command line that is used when implementing 1E PXE Everywhere. The command line tool UpdateBootImage.exe is a component of 1E Nomad, and is included with PXE Everywhere. It is used to modify the boot WIM to make it suitable for use with PXE Everywhere. Refer to Creating a PXE boot image on ConfigMgr 2012 for details. It requires 7 distinct pieces of information to be able to execute correctly – information like the boot image package ID, the source location of the boot image, and the Management Points and Distribution Points that the boot image is currently deployed to.

As you can imagine, when typing all of this information into a command line, there is likely to be a typo or syntax error. This invariably causes needless time and effort to diagnose and resolve. After having this exact issue during several deployments, I decided one sunny Saturday afternoon, while the kids were playing in the garden, to blow the dust off my Visual Studio installation and create a simple tool that would allow me to type the required information without worrying about the syntax. A couple of hours later UpdateBootImageGUI was born.

That first version (v0.1) was no more than a form with labels and text boxes, but it allowed me to create the command line without needing to remember which switches were needed.

[screenshot]

This worked well and removed all of the syntax issues I typically encountered in the past when creating that very long command line, but I still felt I had too much information to enter manually. I knew that much of the needed information resided within the SMSProvider.  For my next version I wanted to create a utility that would minimise the amount of information I would need to type by leveraging the existing information buried away in the SMSProvider itself. This second effort resulted in the version (v1.0.0.2 as of this writing) you can download and use today.
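
To give a flavour of that approach (this is not the tool’s actual code, which is a .NET application), the sketch below uses the pywin32-based wmi package for Python to pull boot image details straight from the SMS Provider. The server name and site code are placeholders for your own environment.

import wmi  # pip install wmi (pywin32-based)

# Placeholders - substitute your own SMS Provider server and site code.
provider_server = "CM01"
site_code = "PS1"

# Connect to the SMS Provider namespace for the site.
sms = wmi.WMI(computer=provider_server, namespace=f"root/SMS/site_{site_code}")

# Enumerate boot image packages - this is the kind of data the GUI uses to
# pre-populate the Package ID and WIM source path fields instead of asking you to type them.
for image in sms.SMS_BootImagePackage():
    print(image.PackageID, image.Name, image.PkgSourcePath)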

[screenshot]

 

The following procedure is a simple walk through on how to use UpdateBootImageGUI.

  1. Launch the base executable PS.UpdateBootImageGUI.exe
  2. Enter the name of the SMSProvider Server, press the <tab> or <enter> key. This will connect to the SMSProvider and list all site codes.

[screenshot]

  3. Select a Site Code from the list and press <tab>. This will populate the Boot Image ID selection box. Select the correct PXE Everywhere boot image from the list.

[screenshot]

  4. The Wim File Path, Distribution Points and Management Points lists will be populated. Select the relevant Distribution Point(s) and Management Point(s).

[screenshot]

  5. Set the desired Certificate Expiry date using the date selector. The default is 1 year.

[screenshot]

  6. All the required fields have now been selected, but you are able to enter information into the Optional fields as desired.

[screenshot]

  7. Finally, click <Execute> at the bottom of the form to create the command line and add it to the clipboard. The following screen is displayed to show exactly what was captured:

[screenshot]

  8. Paste the text copied to the clipboard into an administrator command prompt on a device with the PXE Everywhere components installed. This launches the UpdateBootImage.exe tool to add the additional configuration into the selected boot image.

Note: if PXE Everywhere is not installed in the default location, you can update the EXEPath value in the 1E.PS.UpdateBootImageGUI.exe.config file to the actual install location.

I sincerely hope you find this tool as useful as we here at 1E do. It dramatically simplifies this portion of the OS Deployment process used by PXE Everywhere.

Further Reading

For further information on 1E Nomad, and its world class OS Deployment capabilities, visit the Nomad blogs hosted on our 1E Blog site where many detailed Nomad and OS Deployment articles exist.

Adrian Todd | Principal Consultant


THE ‘1E FREE TOOLS’ SOFTWARE IS PROVIDED “AS IS” AND 1E DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
BY DOWNLOADING THIS SOFTWARE 1E GRANT YOU A LICENCE TO USE THE SOFTWARE, YOU MAY NOT REVERSE ENGINEER, DECOMPILE, DISTRIBUTE, RESELL OR COPY ANY PART OF THE SOFTWARE WITHOUT THE SPECIFIC WRITTEN PERMISSION OF 1E.
1E WILL NOT BE LIABLE FOR PROVIDING SUPPORT, NOR DOES IT WARRANT THAT THE SOFTWARE WILL BE UPDATED, FIXED OR PATCHED. HOWEVER SHOULD YOU HAVE ANY QUERIES OR COMMENTS REGARDING THE SOFTWARE OR THE TERMS OF USE PLEASE EMAIL INFO@1E.COM

No File Sharing – Nomad’s Connectionless Mode is a Life-saver


In Windows, the ‘Server’ service enables support for file and print sharing over the network. When a shared resource on a machine is requested, the Server service responds and routes the resource to the requesting client. The same process applies when making a remote request. The process is as follows:

  1. The client makes a request to the remote server service, requesting access to a file from server file system.
  2. The request is received by the network card (NIC) driver, and this is then forwarded to the appropriate local file system driver.
  3. The file system driver calls the disk subsystem driver to read the file.
  4. The disk subsystem driver returns the file contents to the file system driver, which passes it back to the NIC driver.
  5. The NIC driver then forwards it on over the network to the requesting client.

Now let’s suppose that in an environment file and print sharing over the network has been disabled, i.e. the ‘Server’ service [LanmanServer] has been disabled. The previous process flow doesn’t work, as the remote server request is rejected.

Historically, Nomad communications have required that File and Print services be enabled in order to create the Nomad shares. Since version 4, Nomad can be configured to bypass File and Print services and use a connectionless transfer. Nomad clients support copying from the master machine even when there is no file sharing – that is ‘connectionless peer-to-peer’ (P2P) copying. The standard Nomad share (NomadShr) cannot be created in such a scenario, but peers will still be able to copy the content from the master machine. It is very easy to enable connectionless P2P: you just need to set the Nomad P2PEnabled registry setting (described below).

The P2PEnabled registry setting controls the peer-to-peer communications used by Nomad. To enable connectionless P2P you will need to set the P2PEnabled setting to 0x0006.

The following table shows the supported values:

Bit 0 – 0x0001 (decimal 1): P2P enabled. This is the default value and should remain set to allow Nomad to work correctly.
Bit 1 – 0x0002 (decimal 2): Enable connectionless P2P server.
Bit 2 – 0x0004 (decimal 4): Enable connectionless P2P client.
The Connectionless P2P copy happens over UDP and is only recommended for environments where File and Print services are disabled. In all other instances the standard shares-based connection method should be used. The connectionless P2P is also used when running Nomad under WinPE. This is because WinPE does not allow file sharing.
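
For illustration, here is a minimal sketch of setting that value with Python’s standard winreg module. The registry path shown (HKLM\SOFTWARE\1E\NomadBranch) is an assumption about a typical Nomad client install, so verify it against your own build, and make changes like this through your normal configuration management process.

import winreg

# Assumed location of the Nomad client settings - verify on your own installation.
NOMAD_KEY = r"SOFTWARE\1E\NomadBranch"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, NOMAD_KEY, 0,
                    winreg.KEY_SET_VALUE | winreg.KEY_QUERY_VALUE) as key:
    # 0x0006 = connectionless P2P server (0x0002) + connectionless P2P client (0x0004),
    # per the table above.
    winreg.SetValueEx(key, "P2PEnabled", 0, winreg.REG_DWORD, 0x0006)
    value, _ = winreg.QueryValueEx(key, "P2PEnabled")
    print(f"P2PEnabled is now 0x{value:04X}")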

When using Nomad in WinPE, where there are no pre-cached copies of the content on the local branch and multicast is not enabled, it is recommended that Nomad is configured in connectionless P2P mode. Enabling connectionless P2P mode ensures that multiple WinPE Nomad machines only download their content once and then share it locally.

Saurabh Vij | Technical Lead – QA


ITIL and 1E Solutions go hand in hand


Background

The Information Technology Infrastructure Library (ITIL) is an Information Technology (IT) management framework that provides practices for Information Technology Services Management (ITSM), IT development, and IT operations.

ITIL gives detailed descriptions of a number of important IT practices and provides comprehensive check-lists, tasks and procedures that any IT organization can tailor to its needs. ITIL is published in a series of books, each of which covers an IT management topic. The names ITIL and IT Infrastructure Library are registered trademarks of the United Kingdom’s Office of Government Commerce.

So where do solutions from 1E fit into this? As outlined above, ITIL is very prescriptive; it’s basically a set of processes that are underpinned by tasks and procedures. Due to the technology-agnostic approach of ITIL, it doesn’t really specify which solutions you could potentially use to achieve the objectives of each process. However, if we look closely we can see that 1E solutions can, and do, contribute to the aims and objectives of a number of ITIL processes.

Availability Management

“Availability Management allows organisations to sustain the IT service availability to support the business at a justifiable cost. The high-level activities are Realise Availability Requirements, Compile Availability Plan, Monitor Availability, and Monitor Maintenance Obligations.”

Nomad:

Nomad integrates with Microsoft Configuration Manager and ensures software packages and updates are deployed in a robust and reliable manner with minimal impact on any underlying services. Having your systems patched and updated is critical to meeting some of the objectives of the Availability Management process, namely Reliability, Maintainability and Serviceability. The really clever thing is that, due to its efficient use of only the bandwidth that is available, it doesn’t have any impact on other network-connected services or programs already running on the client machines.

1E Agent:

The Wakeup component of the 1E Agent ensures that client machines can be woken and are available to be patched or ready to conduct day to day business. You can schedule and stagger the machines being woken up so your Network availability is not affected by all your machines joining the network at the same time thus helping keep Security Authorities and Directory Services free to deal with other requests.

Capacity Management

“Capacity Management supports the optimum and cost-effective provision of IT services by helping organisations match their IT resources to business demands”

Nomad:

Nomad fulfils the cost-effective part of the statement above: by deploying Nomad you can eliminate the need for much of the server-class hardware usually required to store software packages and updates for download at remote sites.

1E Agent:

The NightWatchman aspect of the 1E Agent helps with capacity management by ensuring that systems are gracefully hibernated, slept or fully shut down, thus saving power in both fiscal and carbon terms. As clients are no longer using server and network resources, those resources are able to carry out their day-to-day operational tasks such as batch processing or system and data backups. As these tasks have more capacity available to them, they should complete more quickly, eliminating the risk of tasks running into the next business day.

Software Asset Management

SAM is defined as “…all of the infrastructure and processes necessary for the effective management, control and protection of the software assets…throughout all stages of their lifecycle.”

AppClarity:

AppClarity leverages Microsoft Configuration Manager to instantly obtain an accurate picture of all deployed applications and their usage. It enables you to gracefully reclaim those unused licenses so you can reallocate or reduce Audit liability.

Shopping:

Through the use of User and Computer Categories you can effectively control what users have access to in Shopping. This ability to control what users are entitled to order, and therefore download and install, fulfils the aspect of controlling and protecting your software assets by preventing software being misappropriated to the wrong users.

Financial Management for IT Service

“The three core fundamental tenets of FMITS are Accounting, Budgeting and finally charging (which is optional)”

AppClarity:

AppClarity will enable you to fully account for all the Software that is deployed and in use and also not in use in your environment, enabling you to have informed input into your forecasting and budgeting.

1E Agent:

Due to the many reports available in the NightWatchman Management Centre report console you can accurately account for the monetary cost of power consumed for each individual computer. This information will contribute to you fully understanding the true operational cost of each computer and the whole estate.

Shopping:

Shopping enables users to self-serve their own software and services; however, you can apply approval chains and charges for each software application and service. You can even rent out licenses. The reporting gives you the ability to understand what users and departments have shopped for and deployed. This can help with accounting and forecasting for future budgets.

Service Desk / Request Fulfilment

Tasks include handling incidents and requests, and providing an interface for other ITSM processes

Shopping:

Shopping gives users the ability to order software, hardware and services from one central point. Your Service Desk will be able to take these orders and action them accordingly. This gives your users more choice in the way they wish to interact with IT, but in a controlled and accountable manner. Also, the open integration of the Shopping APIs enables you to let your native Service Desk interact with the Shopping workflow.

Simon Woods | Technical Advocacy Manager


How big does the ConfigMgr client cache need to be to accommodate Nomad?


When planning a ConfigMgr and Nomad implementation or when integrating Nomad into an existing ConfigMgr environment, I’m often asked “How big does the ConfigMgr client cache need to be to accommodate Nomad?” Or, “Do we need to change the ConfigMgr client’s cache size to accommodate Nomad?”

The short answer is, “It doesn’t matter.”

Now, when I say “It doesn’t matter”, I don’t mean that the size of the ConfigMgr client cache doesn’t matter at all. On the contrary, it’s a critically important design decision. If it’s too small, software distribution success may suffer. If it’s too big, you risk the ConfigMgr client’s cache growing to completely fill the drive’s remaining space. Either way, an improperly sized client cache size will lead to operational headaches… Nomad or not. I say it doesn’t matter, because when determining the proper ConfigMgr cache size for your environment you don’t need to factor in, or accommodate for Nomad.

Many incorrectly presume that a larger, or oversized, ConfigMgr client cache allows Nomad to operate more efficiently, because more content remaining in the ConfigMgr client’s cache means more content is available to Nomad for local redistribution. The fact is, however, that an oversized ConfigMgr client cache doesn’t benefit Nomad. The best thing you can do to ensure Nomad operates optimally is to ensure that ConfigMgr software distribution works reliably by selecting a properly sized ConfigMgr client cache.

Before I share the two factors that should determine the proper cache size for your ConfigMgr clients, let’s review some facts about how the ConfigMgr cache operates and its interaction with Nomad.

  • The ConfigMgr client and Nomad each independently maintain and manage their respective cache directories.
    • The ConfigMgr client’s default cache size is a static size of 5 GB.
    • Nomad’s default cache size is more dynamic and based on the currently available disk space. (Nomad’s cache never grows such that less than 10% of total disk space remains available.)
  • Files downloaded by Nomad are hard-linked to the ConfigMgr cache directory. (Hard-linking allows the cached content to appear in both locations without doubling the required storage; a short sketch after this list illustrates the idea.)
  • Content downloaded directly by the ConfigMgr client (outside of Nomad) is not hard-linked to the Nomad cache.
  • When Nomad determines some of its content must be removed to clear space for incoming content (Automatic Cache Cleaning), Nomad also removes the selected content from the ConfigMgr cache. This is also the case when content is removed using the Nomad CacheCleaner.exe utility. Below is a snippet of the NomadBranch.log deleting version 2 of package XYZ0001D from both the Nomad and ConfigMgr cache.

 

PrepareDeletion XYZ0001D(2)                                   CacheManager

ConfigMgr:DeleteCacheContent XYZ0001D(2)                       CacheManager

Nomad:DeleteCacheContent XYZ0001D(2)                                 CacheManager

Deleting C:\ProgramData\1E\NomadBranch\XYZ0001D_Cache_deleting(2)    CacheManager

 

  • However, when the ConfigMgr client removes content from its cache, even when it is hard-linked to the Nomad cache, it remains in the Nomad cache (and is still available for redistribution to its peers on the subnet via Nomad). Yes, you could call this a one-way street.
  • Before the ConfigMgr client ever invokes Nomad to download content, the ConfigMgr client checks the available ConfigMgr cache space. If additional space is required for the download, the ConfigMgr cache begins deleting packages from its cache to make space. (Again, any content the ConfigMgr client removes still remains in the Nomad cache.)
  • Any content placed in the ConfigMgr cache, has a default “tombstone age” of 24 hours. This means that ConfigMgr will never remove this newly/recently downloaded content until the tombstone age has passed, regardless of circumstance.
  • When the ConfigMgr cache cannot create sufficient space for the incoming content, the client returns an error to ConfigMgr, never invoking Nomad.
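
If hard-linking is unfamiliar, the short sketch below (plain Python, nothing Nomad-specific) shows the general idea: two directory entries pointing at the same file data, so the content appears in both caches without consuming the space twice, and deleting one name leaves the other intact.

import os
import tempfile

# Demonstrates the hard-link behaviour conceptually - this is not Nomad's own code.
workdir = tempfile.mkdtemp()
nomad_copy = os.path.join(workdir, "nomad_cache_file.bin")
ccm_copy = os.path.join(workdir, "ccm_cache_file.bin")

with open(nomad_copy, "wb") as f:
    f.write(b"x" * 1024 * 1024)    # 1 MB of content "downloaded by Nomad"

os.link(nomad_copy, ccm_copy)      # hard-link it into the "ConfigMgr cache"

print("Link count:", os.stat(nomad_copy).st_nlink)            # 2 - one file, two names
print("Same data?", os.path.samefile(nomad_copy, ccm_copy))   # True

os.remove(ccm_copy)                # "ConfigMgr" deletes its copy...
print("Still in Nomad cache:", os.path.exists(nomad_copy))    # ...the content remains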

Based on these facts, we can see that a reasonable and properly-sized ConfigMgr client cache setting allows Nomad the space it needs.

Most likely, the default ConfigMgr cache size of 5 GB is going to be too small in your environment, but exactly how big should it be? I recommend you size the ConfigMgr client cache to accommodate the larger of the following two items:

  1. The largest single application, package, or operating system image you’ll want to deploy in your environment. (Now and in the foreseeable future).
  2. The amount of content you expect to be deployed to a system in a 24 hour period (refer to the default “tombstone age”, above).

Identify the size of each, select the larger value and add a reasonable “buffer”, as the short sketch below illustrates.  For most of the enterprises I’ve worked with, we found the optimal cache size to be somewhere between 15-25 GB.
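
As a trivial illustration of that rule of thumb (all of the numbers here are invented):

# Hypothetical content sizes, in GB, for illustration only.
largest_single_deployment_gb = 12   # e.g. your biggest OS image or application
content_per_24_hours_gb = 9         # what a busy day of deployments might total
buffer_gb = 5                       # headroom for the unexpected

recommended_cache_gb = max(largest_single_deployment_gb,
                           content_per_24_hours_gb) + buffer_gb
print(f"Recommended ConfigMgr client cache size: {recommended_cache_gb} GB")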

Duane Gardiner | Principal Consultant



OS Deployment Made Easy: Serve Yourself!


Background

Any organization that goes through the process of developing and deploying a new operating system knows a simple truth: coordinating with the end users around scheduling the OS deployment is one of the most challenging tasks in the entire project plan. There never seems to be a “good” time to actually fire off the deployment you spent so much time and effort to get “just right”.  There are of course a boatload of reasons why this is true, but fundamentally it comes down to people. For every computer you need to upgrade, there is a human being on the other side of the system, and finding a convenient time to get the upgrade done is as varied as the people involved. After all, there are always those who may be working late, or coming into the office on the weekend to get caught up, and so forth. So what’s the solution? Simple: delegate! Put all the moving parts in place and let the end user select the day and time that meets their schedule (within admin limitations of course)!

This seemingly simple concept is not as easy as it may appear. That’s one reason why this capability is a key feature of our solution Shopping, the enterprise app store. We refer to this capability as OS Deployment Self-Service. Naturally, every Administrator reading this is already starting to cringe as they begin fast-forward thinking about all the issues this concept may cause them, making their lives even more difficult! This article is intended to ease those concerns, and provide a simple step-by-step overview of how simple it is to set up the process, and apply solid controls around who can actually use the option and when. Finally we will walk through the end user experience within Shopping to illustrate the simplicity of that piece of the puzzle.

Admin Set Up

We start the process within the Shopping Admin console, in the Applications node, where a right-click presents the New OS Deployment option. This launches the New OS Deployment Wizard where we will create the basic Shopping object (note, you can click on any image to expand it).

[screenshot]

The usual descriptive text goes in first. The information here, including the Description text, is ultimately displayed to the Shopper in the UI.

[screenshot]

Next the Administrator selects the desired Microsoft System Center Configuration Manager (ConfigMgr) collection being used to control the OSD actions. This is the location where the shopper’s computer is placed by Shopping’s integration with ConfigMgr. We further select the Deployment that is targeted at this particular collection. This is the actual Task Sequence used to install the image.

04

Next the Administrator selects the AD User group (or groups) that will be allowed to see the Windows Migration option in their Shopping UI. This membership is modified over time as more and more groups are added, as the deployment is gradually rolled out. This provides a high degree of control and a phased deployment strategy.

[screenshot]

We will select the people in our eastern regional office as our starting point. Add more as appropriate over time.

[screenshot]

Note that further access control can be applied to computers as well as people. For example, if we had an AD Computer group of x64 machines with 8 GB RAM, we could also add that machine group here. In this way, only those Eastern Region users with a qualifying computer will see the Windows Migration option.

[screenshot]

Et Voila! The basics are now complete.

[screenshot]

Now that the basics are complete, there are several more actions that further control the process that this deployment will observe and also aid the shopper. We start via the Properties node of the newly created object.

[screenshot]

The [General] and [ConfigMgr Collection] tabs are of no interest here. They still contain the options selected earlier in the Wizard. We will, however, do some additional fine tuning in the remaining tabs.

[screenshot]

The first step of course is a double check that we are indeed targeting only the Eastern Regional Office users. We don’t want any surprises later on!

[screenshot]

Next the Administrator provides a simple text list of those applications that are included in the standard image. This will be presented to the shopper later on to help them to understand what they get, and perhaps more importantly, what they do not get when the new image is applied.

[screenshot]

Lastly we come to the scheduling magic. There is a lot of information and options provided here. It is prudent to take the time to make sure everything is set as desired.

Maximum Deployments Per Day

This limits how many deployments can be scheduled in a given day (not enabled here).  Once the set limit is reached on a given day, that date is no longer available. The shopper would then of necessity need to pick a different date. This is a great way to limit how busy the Help Desk might be fielding calls that may result from any questions that may arise.

Enable Scheduling Restriction

This setting allows the administrator to select the starting and ending dates during which the self-service option to launch the Windows Migration wizard is displayed to the shoppers.

Enable Time of Day Scheduling Restrictions

Here the Administrator may block out portions of the 24 hour day wherein no deployments may be scheduled. Perhaps there are other network activities in progress during these times, for example.

Exclude the Following Days

The Scheduling Restrictions option above selects a contiguous period of time over a date range. Here individual days may be further excluded within that range, such as weekends, every Thursday, or perhaps a holiday that falls within that date range, so an explicit date or dates may also need to be excluded. Any and all of these options are available here.

[screenshot]

The entire process is now fully complete and configured. We are now set to allow the Eastern Regional Office staff to undertake the installation of Windows 8, weekdays only, between 1 and 23 March, except 0400-0900 daily.

[screenshot]

Lastly, as yet another safeguard, until we hit the “Master Switch” to make the entire process available to those Eastern Regional Office employees, nothing will happen. Note well the existence of the Big Red Button feature to Disable Application, immediately below this option.

[screenshot]

We are now totally prepared to commence the phased availability and self-service deployment of our Windows 8 deployment to our Eastern Regional Office.

End User Experience

As discussed in the preceding section, only those people who are authorized (or invited, if you will) to use this self-service option will see it displayed in their Shopping interface. Furthermore they will only see it presented during the date range that the administrator allows it. Assuming that all of these criteria are met, our shopper can now begin to schedule the day and time (within the allowed limitations described earlier) they desire that meets their schedule. Let’s walk through the process of completing this simple 5-step wizard.

It’s SHOW TIME!

In the following sequence, we see our shopper, named Administrator in this example, shopping on the machine named DEV26-CM01 (this is how we can further restrict the display if the machine group option were also selected earlier in the setup process), seeing the Windows Migration option displayed in the scrolling banner in the Shopping UI. This is displayed at this point in time because the user is now considered “authorized” to schedule the OS deployment based on the criteria configured in the Shopping Admin setup process. Now that we are “in the zone”, let’s [Launch] the wizard and get on with scheduling the deployment.

[screenshot]

We now see the greeting screen to begin the process. Right off the bat the shopper is reassured that this is not going to be a long and drawn out process as the entire sequence of actions to follow are clearly displayed in an intuitive and simple fashion.

[screenshot]

In our example, the Administrator is offering two distinct operating system choices to the shopper. Generally there would likely only be a single choice here. Scenarios where multiple options could be useful would be in an international organization where several different language options for the same OS could be offered. Here, the shopper is opting for the Win8 image we described in detail earlier.

[screenshot]

Now things are getting interesting! This screen displays two distinct panels of information. On the left is the fixed text created by the Shopping Administrator in our set-up section. It simply lists what is included in the base image and is informational only. The right pane is where the real magic happens. We are making the assumption that Shopping has been in this environment for some time, and that our shopper has shopped for a number of applications in the past. Shopping knows what our shopper has installed previously and simply presents that list of existing applications. We are essentially asking, “You have these applications already installed on your computer. Would you like to re-install them once your new image is in place?”

Now, the astute reader will also see something interesting in this list. Looking at Reader X under the Current Application list, you see Reader X1 in the Application Post Migration list. The same applies for Project Professional and OpenProj. This hints at the integration between Shopping and our AppClarity product, which understands the usage of all applications on all machines in the estate. The integration of these two products allows the Shopping Administrator to set mapping rules that allow dynamic decision making during the imaging process. The specifics of that process are out of scope for this article. In these examples we are showing the mapping between the existing product(s) and what will actually be delivered into the new image. Not shown in this graphic, the Rating column would show actual usage of the existing application using a 5-star ranking model. This gives the shopper a visual clue as to how often they actually used their existing applications. Maybe it doesn’t make sense to reinstall something they are not even using. After all, if they should need it later on, it’s easy enough to go shop for it again!

[screenshot]

Now that the decisions around what to reinstall are made, it’s time to decide when this imaging task should actually execute. In this first part of the Schedule screen the shopper sees the available dates that are allowed (determined in the set-up stage above). Dates not allowed are gray. Here we will select a day and time to start the process.

[screenshot]

Now that we have determined the “what” and the “when”, the wizard asks for confirmation that the user has taken steps to protect their personal data, and we are now done!

[screenshot]

Once the designated date/time arrives, the imaging kicks off. Once completed, the selected apps are reinstalled on the new image, and life is good! There is one caution that needs to be stated here, however. If the shopper forgets to leave the computer powered on at the designated date/time, and you are not using a wake-on-LAN tool in concert with ConfigMgr, the image will launch shortly after the machine is next powered on. Of course, if you have our 1E WakeUp feature (part of our NightWatchman power management solution), then this is not a problem, as 1E WakeUp will automatically power on the machine at the designated date/time!

WAIT! I changed my mind!!

Invariably circumstances may change for the shopper, resulting in a need to modify the schedule previously created. Note in the following image that the original Windows Migration display has changed. We now have some different options available. Here the shopper decides to change the previously selected date and time for the image deployment.

[screenshot]

The shopper simply repeats the earlier steps and makes the desired changes, perhaps selecting a different day or time from that selected originally.

[screenshot]

When is that new image happening again?

Why not add a reminder to your calendar to give you a gentle nudge, the day before for example, to remind you to leave the computer turned on so the image will be able to execute on schedule?

[screenshot]

WAIT!!! I changed my mind!!!!!

Simple! Just revisit the still-present banner and cancel the entire process. Remember, the Windows Migration banner will remain visible for the entire period authorized for this shopper, or until the machine is re-imaged.

[screenshot]

Summary

In this article we reviewed the process of selecting a task sequence that deploys a fairly bare bones image in the Shopping Administration node, determining which machines can initiate the deployment, when they can (and equally important, when they cannot) do so, and making that option available to those authorized shopping users.

We then showed the simple 5-step process the shopper used to schedule their desired deployment date/time, set a reminder in their calendar, optionally change the schedule or ultimately cancel the entire operation.

By selectively delegating the actual installation of the new operating system to select groups of users over a phased time-frame, the OS Deployment Project Team now have a means of managing one of the most challenging aspects of the entire project, while providing the end users with a high degree of satisfaction and a sense of control over their own environment. Everybody wins, and the ongoing process of OS deployment becomes yet another business-as-usual exercise in the IT space.

If you would like to take our Shopping product for a test drive as a shopper or just learn more about the product, visit the Shopping product page or go directly here to plug in the requisite Who-Are-You stuff and off you go.

 

Ed Aldrich | Solutions Engineer


1E AppClarity software install count, usage and SCCM inventory data


The most common source of inventory data used with 1E AppClarity, our Software Asset Optimization product, is Microsoft System Center Configuration Manager (aka SCCM or ConfigMgr). AppClarity can take data from several sources; for simplicity’s sake, the specifics herein relate to ConfigMgr 2012.

When dealing with ConfigMgr, the inventory data is used to calculate software install count and usage for desktop environments, which is displayed in the AppClarity console. This enables users to make intelligent decisions and save money by reclaiming software from machines which are not actively using it. However, there are a few occasions where the install count and usage information may differ slightly between ConfigMgr and AppClarity – the number might be lower than expected after completing end-to-end syncs using scout.exe and the ActiveEfficiency sync.

It’s important to understand that a lower software install count and usage following a completed end-to-end sync might not necessarily indicate a problem with AppClarity. In this blog I will demonstrate how to identify possible causes of discrepancies in software install count and usage.

I am going to start by reminding you that in AppClarity “Machine Inactivity” is set to 7 days and “Hardware Inventory Threshold” is set to 8 days by default. What do these values actually mean and how do they impact software install count and usage?

In ConfigMgr, Hardware Inventory is enabled in the Default Client Settings and, by default, is scheduled to occur every 7 days. Also in ConfigMgr, Heartbeat Discovery (used to maintain the ConfigMgr client’s active status in the ConfigMgr database) is enabled, and the ConfigMgr client sends a heartbeat every week (7 days).

Based on this information we can conclude that if a ConfigMgr client has not sent Heartbeat Discovery data for more than a week, and/or the ConfigMgr client’s Hardware Inventory has not been processed successfully for more than 8 days, the AppClarity scout.exe and ActiveEfficiency syncs will mark the device record as “inactive” in the AppClarity database. When the device record is marked as inactive in the AppClarity database, the software install count and usage are not displayed in the AppClarity console.
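
To make that rule concrete, here is a small illustrative sketch (not AppClarity’s actual implementation) of how a device would be classified against those default thresholds:

from datetime import datetime, timedelta

# Default AppClarity thresholds described above.
MACHINE_INACTIVITY_DAYS = 7       # no heartbeat for more than a week
HW_INVENTORY_THRESHOLD_DAYS = 8   # no successful hardware inventory for more than 8 days

def is_device_active(last_heartbeat, last_hw_inventory, now=None):
    """Return True if the device would still be counted as active."""
    now = now or datetime.now()
    heartbeat_ok = (now - last_heartbeat) <= timedelta(days=MACHINE_INACTIVITY_DAYS)
    inventory_ok = (now - last_hw_inventory) <= timedelta(days=HW_INVENTORY_THRESHOLD_DAYS)
    return heartbeat_ok and inventory_ok

# Example: a laptop that last reported 10 days ago drops out of the install counts.
ten_days_ago = datetime.now() - timedelta(days=10)
print(is_device_active(ten_days_ago, ten_days_ago))   # False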

Please note that, for reporting reasons, AppClarity never deletes device records from the AppClarity database.

Now that we have clarified what may cause the software install count and usage figures to drop in AppClarity, we can identify whether there is a genuine problem or whether this is expected behaviour.

There are valid reasons why ConfigMgr clients may not send Heartbeat and/or Hardware Inventory data. This could simply be because the device is shut down (the user is on vacation) or because the device has not connected back to the corporate network, for example if a laptop user is currently travelling with no remote access.

In both scenarios above, we expect the device to update its Heartbeat and Hardware Inventory information in the ConfigMgr database as soon as it reconnects to the corporate network. AppClarity should then update the device record back to “active” at the next end-to-end sync.

What if the device is being used and connected to the corporate network, but still not showing its software install count and usage information? Don’t panic! Below are a few tips to help identify whether there is a problem with the ConfigMgr client, with AppClarity, or with both.

In ConfigMgr, the Reporting Services Point provides a few built-in reports that are very useful when trying to identify ConfigMgr clients that have not reported (sent Heartbeat data) or been inventoried (had Hardware Inventory processed) within a specified number of days; a rough SQL equivalent of the inventory check is sketched after the list:

  • Computers that have not reported recently (in a specified number of days) – 7 days

This report helps find ConfigMgr clients that have not sent Heartbeat data in the last 7 days.

  • Computers not inventoried recently (in a specified number of days) – 8 days

This report helps find ConfigMgr clients that have not been inventoried in the last 8 days.

  • Computers that might share the same Configuration Manager unique identifier

This report helps find ConfigMgr clients that might be duplicated in the ConfigMgr database.
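
If you prefer to query the site database directly, the “not inventoried recently” check can be approximated against the standard v_R_System and v_GS_WORKSTATION_STATUS views. The sketch below uses Python with pyodbc; treat it as a rough, unsupported equivalent of the built-in report, and note that the server name CM01 and database name CM_PS1 are placeholders for your own site server and site database:

import pyodbc  # assumes the Microsoft ODBC Driver for SQL Server is installed

# Placeholder connection details - replace with your own site server and site database.
conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=CM01;Database=CM_PS1;Trusted_Connection=yes;"
)

# Rough equivalent of "Computers not inventoried recently" with 8 days,
# based on the LastHWScan column of v_GS_WORKSTATION_STATUS.
query = """
SELECT  sys.Netbios_Name0 AS ComputerName,
        ws.LastHWScan     AS LastHardwareScan
FROM    v_R_System sys
JOIN    v_GS_WORKSTATION_STATUS ws ON ws.ResourceID = sys.ResourceID
WHERE   ws.LastHWScan < DATEADD(day, -8, GETDATE())
ORDER BY ws.LastHWScan
"""

for name, last_scan in conn.execute(query):
    print(f"{name}: last hardware scan {last_scan}")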

Any machine returned by these reports can potentially be marked as “inactive” in the AppClarity database. It’s also possible to verify a machine’s Heartbeat DDR and Hardware Scan status in the ConfigMgr console, under the Assets and Compliance, Devices node. Clicking the device name shows, on the Summary tab, the Client Activity’s last Heartbeat DDR and Hardware Scan date and time. Please note that the Summary tab information might not be real time.

Finally, it’s possible to right-click the device and select Start, then Resource Explorer. Under the device name, expand Hardware and click Workstation Status. The Last Hardware Scan (Client Local Time) value shows when the last hardware inventory for that device was inserted into the ConfigMgr database.
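
The same information can also be read locally on the client from WMI: the InventoryActionStatus class in the root\ccm\invagt namespace records when each inventory action last ran and last reported. The sketch below is an assumption-laden example using the third-party Python wmi package (pip install wmi), run on the client with administrative rights; the well-known ID {00000000-0000-0000-0000-000000000001} identifies the hardware inventory action:

import wmi  # third-party package: pip install wmi (requires pywin32); run on the client as admin

HW_INVENTORY_ACTION = "{00000000-0000-0000-0000-000000000001}"  # well-known hardware inventory ID

# Each InventoryActionStatus instance records when an inventory action last
# started and when it last sent a report to the management point.
inv = wmi.WMI(namespace=r"root\ccm\invagt")
for action in inv.InventoryActionStatus():
    marker = "  <-- hardware inventory" if action.InventoryActionID == HW_INVENTORY_ACTION else ""
    print(action.InventoryActionID,
          "started:", action.LastCycleStartedDate,
          "reported:", action.LastReportDate, marker)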

If, after verifying the information above, you suspect the ConfigMgr client on a particular device (or devices) is not sending Heartbeat and Hardware Inventory information to ConfigMgr, there are a few logs that can help identify the cause of the issue:

ConfigMgr client logs

InventoryAgent.log

Located under the C:\Windows\CCM\Logs folder (default location), InventoryAgent.log shows information for Hardware Inventory, Software Inventory and Heartbeat. Review this log for possible issues, such as WMI corruption, when the ConfigMgr client is processing Heartbeat and Hardware Inventory data. It is possible to force a Heartbeat or Hardware Inventory cycle from Control Panel: open the Configuration Manager applet, go to the Actions tab, select Discovery Data Collection Cycle (Heartbeat) or Hardware Inventory Cycle, click the “Run Now” button and then “OK”. Then check InventoryAgent.log for possible errors.
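
The same two client actions can also be triggered programmatically through the SMS_Client WMI class in the root\ccm namespace, using the well-known schedule IDs for Hardware Inventory and Discovery Data Collection (Heartbeat). A hedged Python sketch, again using the third-party wmi package and run locally with administrative rights, might look like this:

import wmi  # third-party package: pip install wmi (requires pywin32); run locally as admin

HARDWARE_INVENTORY_CYCLE = "{00000000-0000-0000-0000-000000000001}"
HEARTBEAT_DDR_CYCLE      = "{00000000-0000-0000-0000-000000000003}"

client = wmi.WMI(namespace=r"root\ccm")

# Equivalent to clicking "Run Now" for each action in the Configuration Manager applet.
for schedule_id in (HEARTBEAT_DDR_CYCLE, HARDWARE_INVENTORY_CYCLE):
    client.SMS_Client.TriggerSchedule(sScheduleID=schedule_id)
    print("Triggered", schedule_id)

# Afterwards, watch InventoryAgent.log for errors while the cycles run.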

ConfigMgr site server logs

There are a couple of logs on the ConfigMgr site server that could be useful for troubleshooting:

DDM.log

The ddm.log might show possible errors while processing discovery information and Heartbeats for existing ConfigMgr clients.

Dataldr.log

The dataldr.log will show possible issues with processing Hardware Inventory data received from ConfigMgr clients.

By now you have likely verified and resolved any issues with Heartbeat and Hardware Inventory in ConfigMgr. What if the issue still remains, i.e. the software install count and usage are still lower than expected? What should you do next?

The next steps are to look at potential issues with the ActiveEfficiency scout.exe sync and the ActiveEfficiency sync from AppClarity.

The first step is to check the scout.log, webservice.log and AppClarity.ServiceHost.log. The following are the default locations for the 1E AppClarity log and the 1E ActiveEfficiency scout.log and webservice.log:

 

C:\ProgramData\1E\ActiveEfficiency\scout.log

The scout.log shows information from the scout sync, which connects to the data source (in this case, the ConfigMgr database) and inserts users and devices into the ActiveEfficiency database. The important information in the scout log is whether and when the sync started and completed, plus the number of users and devices processed:

 

INFO : Configuring Scout. Modes=configmgr

Total Users processed=[1000]…

Total Devices processed=[1000]…

INFO : Scout scanning completed successfully.
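
If you want a quick, scriptable sanity check rather than reading the log by eye, the excerpt above can be scanned for the completion message and the processed totals. A minimal Python sketch, assuming the default log path and the line formats quoted above:

import re

LOG_PATH = r"C:\ProgramData\1E\ActiveEfficiency\scout.log"  # default location, as above

totals = {}
completed = False
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = re.search(r"Total (Users|Devices) processed=\[(\d+)\]", line)
        if match:
            totals[match.group(1)] = int(match.group(2))   # keep the last value seen
        if "Scout scanning completed successfully" in line:
            completed = True

print("Scout sync completed:", completed)
print("Processed totals:", totals)   # e.g. {'Users': 1000, 'Devices': 1000}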

 

C:\ProgramData\1E\ActiveEfficiency\webservice.log

The ActiveEfficiency webservice.log shows user and device information as the scout.exe sync posts it to the web service via IIS. The important information in webservice.log is potential duplicate records from the data source (the ConfigMgr database).

 

C:\ProgramData\1E\AppClarity\AppClarity.ServiceHost.log

The AppClarity.ServiceHost.log shows information on when the ActiveEfficiency connector (created in the AppClarity console during installation) connects to the ActiveEfficiency server, syncs the user and device information and calculates the software install count and usage.

 

If no potential issues are identified in the AppClarity and ActiveEfficiency logs, close the AppClarity console, restart the AppClarity services (1E AppClarity and 1E AppClarity Catalog Update Service) and reopen the AppClarity console to refresh the in-memory views. This may help bring back the missing software install count and usage information.

If none of the above has resolved the issue, it’s time to contact the 1E Support team.

Claudio Lopes | Technical Escalation Engineer

You can follow 1E and wider-industry news and events via Facebook, Google+, LinkedIn, myITforum, and Twitter, or by signing up to our monthly content newsletter, V1Ewpoint.

If you found this article helpful, please take a moment to share it with your contacts using the social media buttons to the left.

From the Factory to the User


Streamlining Computer Delivery Processes Using 1E Shopping and SCCM

We all come to rely on manufacturers. Every one of us has our preferred computer manufacturer, firewall manufacturer, VPN, and so on. In my case, I love 1E products; I have for 10-plus years. We already had Nomad (best money ever spent) and NightWatchman, and I had been looking at 1E’s Shopping for years. The reason I’m writing this blog post is because of the 1E Professional Services Consultant who did so much for us during this transition – Apurv Gupta. In my many – many – times thanking Apurv for his genius, I asked him what I could do for him as thanks. His answer was to write a case study on the solution he created for me. So I’m glad to do it, and you can read the full thing on the 1E Resource Center. If you (the reader) get any use out of this, well, that’s great, and I will do my best to be clear and let you know the benefit we get out of every solution, but I write this blog as a heartfelt thank you. Thanks Apurv!

A couple of years ago, at the environmental firm where I work in the U.S., we decided to create a few “imaging depots” to help us service our users for desktop and laptop delivery. The idea behind these depots was that local staff would build and distribute machines to users across the company.

From a user perspective, the depots were a huge success. When a user got a computer, it came with everything that user needed or requested pre-installed, thanks to a detailed setup process:

  1. The depot tech would use SCCM to image the computers
  2. They would then use a combination of SCCM and manual installations to build the computer for the user
  3. They would get a list of software needed for that user and ensure it was approved by the user’s immediate supervisor and the person holding the purse strings for the division. It was the depot tech’s responsibility to track all of those people and secure written approval
  4. Once the software was installed, the depot tech would manually move the user’s data, often over the WAN from their old machine (if there was one), to the new machine
  5. The depot tech would then pack the computer and ship it and any accessories to the user
  6. The tech would then follow up with the user to make sure everything was ok with the computer

This process represented customer service at its best, but it wasn’t sustainable at all. In fact, from a business perspective, the depots were a disaster. When you’re providing that much customer service using just a few personnel, things are likely to back up, and they did. One of the problems was around storage: machines came from the manufacturer, were unboxed by the depot tech, and stored safely. Since each computer typically came along with more than one monitor, plus accessories, that meant that if 150 computers were ordered, over 300 boxes arrived. Maintaining such a large quantity of equipment at a non-warehouse location wasn’t possible. The depots became time consuming and costly and we needed a way out quickly.

The solution we came up with was this: we would ship the computers directly from the manufacturer to the user, and provide the user with a self-service process to complete the build of their own computer. This process would make the user responsible for ordering their computer through 1E Shopping and setting it up on arrival. Upon connecting the computer to the domain, the user would be able to request the software they need, with all approvals handled automatically and the software automatically installed on the user’s computer. The final part of the sequence would involve the user running a program to move their data from an existing computer to their new computer. If you’d like more detail on how we did this, you can read the whole – illustrated – story on the 1E website.

Thanks to 1E – and Apurv – we were able to establish this end-to-end automated process without impacting the quality of service experienced by the end user. The business has saved the cost of the imaging depots, but any employee ordering a new computer through Shopping still receives a PC that is fully configured to their needs and can easily switch the data from their old machine to the new one.

Click here to read the full case study.

Gene Acker is an SCCM Architect, Project Manager and Administrator for a large environmental firm in the U.S. He has been working with Microsoft systems management software since the early days of SMS 2003. Gene is also a systems administrator and an automation specialist. Prior to working in the IT field he spent 15 years in the United States Navy in a technology-related field.

You can follow 1E and wider-industry news and events via Facebook, Google+, LinkedIn, myITforum, and Twitter, or by signing up to our monthly content newsletter, V1Ewpoint.

If you enjoyed this article, please take a moment to share it with your contacts using the social media buttons to the left or below.

IT Project Delivery, Easy as Ordering a Pizza


Delivering a pizza is easy.  Delivering an IT project is not!

Why are they different?  If you think about the two, aside from the cheese and crust, they aren’t completely different animals.  Both have a recipe to follow, both have a customer, both have success criteria, and we have been doing both for a long time.

Case in point, try Googling “IT Project failure” and then “Pizza Delivery Failure”.  The former will result in hundreds of articles citing reasons/percentages/adages all trying to crack the reasons why, while the latter will show funny YouTube videos of delivery guys falling on the ice.  Pizza delivery is just an assumed success.

If this is the case, why is ordering a pizza so much more successful?

Things that make ordering a pizza easy are actually quite elusive during IT project delivery.  A US pizza chain, Domino’s, does pizza ordering very well.  Here is what Domino’s presents to the user when a pizza is ordered online.

[Image: Domino’s online pizza tracker]

When you hit the Purchase button after creating your pizza details, you are presented with a screen that gives the customer a great deal of power.  It’s a quick and easy indication of where your pizza is in the making, baking, and delivering process.  Imagine knowing exactly what’s happening and when it’s happening.  As the pizza moves along in its journey from the oven to your mouth, the blue sections turn red and the name of the person responsible for each step is revealed.

What are the benefits of this process to the customer?

  1. Clear start and end with clear success metrics – I know when my order was placed and I know what needs to be completed to achieve the end result.  And I know what my end result is, a pizza!
  2. Easily understood phases – I don’t know how to make the pizza, but I know enough about what each phase represents to feel confident that the pizza is being made correctly.
  3. Progress easy to measure – Each phase has a clear ending and is unique.
  4. Easy to know what is next / No surprises – I see what’s planned for the pizza and it makes sense; delivery won’t suddenly go before baking so there’s no chance I get raw dough delivered to my house.

Now, think about the issues with your last IT project. If your project was part of the 30% of IT projects that fail, you’ve run into these issues and more:

  1. When is done actually done – I know we’ve completed the project, but what do I have now that it’s complete?  Seems like we just did work to do work.
  2. Project schedule confusion – The project manager sends the schedule every week, but it’s 100s of lines long!  I don’t understand what is being worked on and what is upcoming.
  3. Are we there yet? – I see from the schedule we are 17% complete on the design phase, but what does 17% mean in real life?
  4. Surprise Surprise – I didn’t think we would find so many issues when deploying to production.

IT Project delivery need not be an art form.  Instead, it should be a reliable, battle tested, predictable process that is used on every project, no matter the technology or customer.

The standard 1E Project Delivery method is built in such a way to combat common IT project shortfalls.  Does this mean all 1E projects are ahead of schedule and under budget?  No, but because the 1E project process was created through years of implementation experience, 1E projects are built to have less risk, more predictability, more visibility, and no surprises.

You’ve heard of the AOR process here. While AOR is the holistic 1E customer engagement model, the Optimize phase itself has a project implementation method born from industry best practices and 1E experience.  1E projects are delivered following this recipe:

[Image: 1E project delivery phases]

Look familiar?

Just like ordering your favorite pizza, customers can understand and track project process in distinguishable and intuitive phases.  Upcoming work is predictable.

In the next PMO blog, we’ll dive into the specifics of some of these areas and why they are used to combat common IT delivery issues.

Matt Albert | PMO Practice Manager

You can follow 1E and wider-industry news and events via Facebook, Google+, LinkedIn, myITforum, and Twitter, or by signing up to our monthly content newsletter, V1Ewpoint.

If you enjoyed this article, please take a moment to share it with your contacts using the social media buttons to the left or below.

Your IT Project is Doomed


Your IT project is DOOMED.  But don’t worry, you are not alone.

What is it that forces IT projects to the dark side?  The issues aren’t new:

  • Resources are not qualified or do not have the appropriate skill set to execute the project
  • Requirements are not well defined or change throughout the project
  • Schedule is not clear or next steps are not well understood
  • Risks are unknown and unexpected, especially during production deployment
  • Exit strategy is unclear and not agreed

IT doesn’t have to be this way.

Pizza Time

As discussed in our previous PMO blog here, the 1E project delivery methodology is designed to combat these common missteps by following a structured delivery process.

[Image: 1E structured project delivery process]

Every 1E project is segmented into the following stages:

  • Customer Training
  • Discovery and Requirements Formalization
  • Existing Environment Health Check
  • Solution Design
  • Lab Implementation and Test Plan Execution
  • Production Solution Implementation
  • Pre-Pilot Execution
  • Formal Pilot Roll Out
  • Project Close and Hand Over Activities

By implementing a standard delivery process, both the project team and the customer stakeholders understand where the project is and where it needs to go.

In addition, the 1E project methodology sets itself apart by implementing tools and processes at key project stages.  The following highlights common project issues addressed by this approach:

Resources Not Qualified


An unqualified resource can hinder a project as quickly as a product bug.

  1. 1E Project Managers – Can you do that faster? All 1E Project Managers are PMI (PMP) and/or PRINCE certified with over a decade of combined 1E experience, and several decades of enterprise software experience. 1E Professional Services as a whole works on over 100 projects each year, which allows for constant refinement of the delivery process based on lessons learned.
  2. 1E Consultants – Since the days of SMS.  The 1E Consultant team is a global team of certified experts in 1E products & Systems Management with an average of 10+ years industry experience.  During all projects, 1E Consultants perform an intensive transfer of knowledge and best practices to the customer, focused on the unique characteristics of the implementation and environment.  Our consultants have a proven track record of IT cost optimization while increasing efficiency.

Requirements Not Clear; Priorities Not Agreed


Requirements are critical to project success; it’s impossible to know if you’ve succeeded if you don’t know how success is defined.

1E has a 3-step process when tackling requirements: Statement of Work, Requirements Definition, and Incremental Deliverable Sign Off:

  1. Statement of Work –  You should be able to read my mind! The Statement of Work is the first artifact of a project that defines both scope and success metrics.  SOWs definitely aren’t new, but creating a detailed SOW isn’t easy.  1E has a detailed and focused SOW Customer Questionnaire that gleans information about both the customer business and technology landscape.  The goal with this interview exercise is to help the customer think holistically about what a project with 1E looks like, including business goals, objectives, and technical expectations.
  2. Requirements – What would you say you do here? In support of the SOW document, it’s critical to specifically define project requirements during the discovery and project kickoff phase.  During SOW creation, business goals are defined and agreed, but technical specifics need to be further defined at the start of the project.  Occasionally SOWs are fast-tracked through the procurement process and the proper technical attention isn’t given from a customer perspective.  Properly defining requirements is the catch-all that ensures expectations are specified, communicated, and agreed.  It also ensures the client tech folks actually working on the project, and those directly affected, have a chance to provide input, add requirements, and identify concerns.
  3. Incremental Deliverables Sign Off – Are we there yet?  Knowing what is needed is important, but knowing when the finish line is reached is equally important.  Incremental sign off on deliverables as the project progresses helps prevent surprises during a formal project close.  Nothing keeps a PM awake at night quite like the statement, “We aren’t done, I never signed off on that.” Incremental sign off allows the entire project team to claim formal and tangible progress throughout the project, not just at the end.

Risks Unknown


The biggest enemy of a project schedule and budget is unknown risk.  These unknowns have the potential to delay a project and force it to go over budget, thereby making the PM and project team look bad.

1E projects use two methods in order to uncover these unknowns as early as possible:

  1. Environment Health Check – Open wide and say ahhhhhhhhhhhhh. For every project, a thorough environment health check is performed. Without a health check, existing customer issues could fester only to appear later in the project, causing delays and surprises.  The 1E health check covers a wide range of technology, from CM and networking checks, to security and package practices.  By performing a health check early on, issues are mitigated and we avoid building a new technology on top of an existing problem.
  2. Pre-Pilot – It’s the pilot behind the pilot. Most IT projects, once OK’d in the Lab, are rushed to a formal production pilot.  That is, emailing a large set of users and having them try this new technology.  “Don’t worry, this passed every test case with flying colors in the lab!  So the pilot will be no big deal. ”  Then, as issues crop up (as production is always different than the lab), the project comes to a grinding halt.  All the hard work is immediately undone, and exec stakeholders start having second thoughts.

There will ALWAYS (always, always) be issues during a pilot.  In fact, that’s what you want as a project, to uncover issues during the pilot rather than full deployment.

However, lots of issues during a large pilot can be just as damning as issues during full deployment.

1E developed the concept of a pre-pilot.  It’s a pilot in the production environment, intended to uncover production environment issues, but on a much smaller and more exclusive scale.  Think of it as a members-only or VIP sale.  The pre-pilot targets a handful of users, preferably friendly IT users, who will provide helpful use cases and feedback, but will understand if issues arise.  These users are problem solvers themselves, and will work with the project team to fix issues rather than complaining to the powers that be that they were inconvenienced.  Typical pre-pilot users are IT folks themselves.  Be careful, though, when getting IT users to take part in the pre-pilot; often their systems are customized, which can nullify a typical user production test.

Exit Strategy


  1. Training – You must unlearn what you have learned.  With training from 1E, your operational teams will get a thorough understanding of your own implementation and discover how the value of 1E solutions can be maximized year after year.  Shouldn’t training be at the start of the project?  1E has actually found that training the project team on the new technology at the start of the project leads to better knowledge transfer, discovery sessions, and design.  If the customer project team knows something about the 1E products even before the project begins, it goes a long way to ensuring a successful project hand over.

In the end, leveraging these key tools at key project phases can move your IT project away from the abyss and into the realm of an on-time, under-budget business success!

Matt Albert | PMO Practice Manager

You can follow 1E and wider-industry news and events via Facebook, Google+, LinkedIn, myITforum, and Twitter, or by signing up to our monthly content newsletter, V1Ewpoint.

If you enjoyed this article, please take a moment to share it with your contacts using the social media buttons to the left or below.
