Windows Network Load Balancing (WNLB) is the most common software load balancer used for Exchange servers.
But why not Windows NLB?
· Issues with WNLB
· Service awareness
· Switch/Port flooding
· NAT/Source IP pool/affinity
· Scalability over 8 nodes
· Add/remove single node causes all clients to reconnect
Issues with WNLB: WNLB can’t be used on Exchange servers where mailbox database availability groups (DAGs) are also in use, because WNLB is incompatible with the Windows failover clustering that DAGs rely on.
Service Awareness: WNLB doesn’t detect service outages; it only detects server outages at the IP address level. In other words, NLB is “server aware” but not service aware: if a service fails while the server stays up, NLB will continue to send clients to that server. Manual intervention is required to remove the Client Access server experiencing the outage from the load balancing pool.
Switch/Port Flooding: Using WNLB can result in port flooding. Network Load Balancing induces switch flooding by design, so that packets sent to the cluster’s virtual IP address go to all the cluster hosts. Switch flooding is part of the Network Load Balancing strategy of obtaining the best throughput for any specific load of client requests.
Persistence/Affinity: Because WNLB performs client affinity using only the source IP address, it provides IP-based persistence and nothing more. Session persistence (session affinity) is the ability of the load balancer to make sure a given client always reaches the same server, even across multiple connections. IP-based affinity breaks down behind NAT: when many clients sit behind a single NAT device or proxy, they all present the same source IP, so they all land on the same server and the load becomes skewed.
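As an illustration (not WNLB’s actual internals), source-IP affinity can be sketched as a deterministic hash of the client address; the server names and hash choice below are assumptions of the sketch. Every client behind one NAT device presents the same source IP and therefore maps to the same server:

```python
import hashlib

def pick_server(source_ip: str, servers: list[str]) -> str:
    """Choose a backend based only on the client's source IP (IP-based affinity)."""
    digest = hashlib.md5(source_ip.encode()).digest()
    return servers[digest[0] % len(servers)]

servers = ["CAS1", "CAS2", "CAS3"]

# Two different users behind the same NAT device share one public IP,
# so IP-based affinity pins them both to the same server.
user_a = pick_server("203.0.113.7", servers)
user_b = pick_server("203.0.113.7", servers)
assert user_a == user_b
```

The sketch also shows why this counts as persistence at all: the mapping is stable across connections, because nothing but the source IP feeds the decision.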
Scalability over 8 nodes: Due to performance issues, Microsoft doesn’t recommend putting more than eight Client Access servers in an array that’s load balanced by WNLB.
Add/remove single node causes all clients to reconnect: Any time you add a server to or remove a server from NLB, all Outlook clients are forced to reconnect. While this should rarely happen, it is another example of how NLB does not provide true fault tolerance for your Exchange environment.
Exchange 2013 has been consolidated into two main server roles:
· Mailbox Server role
· Client Access Server role
Mailbox Server role
The Mailbox server role now includes all the traditional server components: the Client Access protocols, the Transport service, Mailbox databases, and Unified Messaging. The Mailbox server handles all activity for the active mailboxes on that server.
Client Access Server role
The Client Access server provides authentication, limited redirection, and proxy services, and offers all the usual client access protocols: HTTP, POP3, IMAP4, and SMTP.
Load balancing has gone through some significant improvements in Exchange 2013.
Only Layer-4 load balancing: The Exchange 2013 CAS is now a thin, stateless server: all sessions to the CAS servers are stateless, so persistence/affinity is no longer required on the load balancer. Nothing is ever queued or stored on the Client Access server. OWA is rendered on the Mailbox server that hosts the user’s mailbox database, so if a client hits a different Client Access server there is no performance degradation; the session rendering OWA for that user is already up and running. Now that session affinity isn’t required, more flexibility, choice, and simplicity are available in the load balancing architecture.
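Because nothing sticky lives on the CAS, a plain layer-4 round-robin is sufficient. A minimal sketch (server names are made up):

```python
from itertools import cycle

# No affinity table needed: requests can land on any CAS, because the
# session state lives on the Mailbox server hosting the user's database.
backends = cycle(["CAS1", "CAS2", "CAS3"])

print([next(backends) for _ in range(5)])  # -> ['CAS1', 'CAS2', 'CAS3', 'CAS1', 'CAS2']
```

Contrast this with the IP-affinity scheme WNLB must use: here no per-client state is kept on the balancer at all.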
HTTPS-only access: Outlook clients no longer use native RPC to access their mailbox; this is now handled exclusively by RPC over HTTPS (Outlook Anywhere). Native RPC is used only for server-to-server communication. POP3 and IMAP4 are also still supported. For Outlook connectivity this means there is only one protocol to consider, HTTP, and it is a great protocol because its failure states are well known and clients typically respond to them in a uniform way.
Forms-based authentication: Exchange 2013 improved the way HTTP cookies are handled. After logon, the user is issued an authentication cookie that is encrypted using the Client Access server’s SSL certificate. This enables a logged-on user to resume the session on a different Client Access server without re-authenticating, provided the servers share the same SSL certificate and are therefore able to decrypt the authentication cookie the client presents.
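The effect of the shared certificate can be sketched with a shared secret and HMAC-signed tokens: any server holding the same key can validate a cookie issued by any other. This is a hedged illustration of the idea only — the key, the function names, and the use of HMAC signing (instead of certificate-based encryption) are all assumptions of the sketch, not Exchange’s actual implementation.

```python
import base64
import hashlib
import hmac

# Stand-in for the SSL certificate shared by every CAS (hypothetical key).
SHARED_KEY = b"same-key-deployed-to-every-CAS"

def issue_cookie(username: str) -> str:
    """Issued by the server that handled the logon."""
    sig = hmac.new(SHARED_KEY, username.encode(), hashlib.sha256).hexdigest()
    return base64.b64encode(f"{username}|{sig}".encode()).decode()

def validate_cookie(cookie: str):
    """Any server with the same key can validate it -- no re-authentication."""
    username, sig = base64.b64decode(cookie).decode().rsplit("|", 1)
    expected = hmac.new(SHARED_KEY, username.encode(), hashlib.sha256).hexdigest()
    return username if hmac.compare_digest(sig, expected) else None

cookie = issue_cookie("alice")            # issued by one CAS
assert validate_cookie(cookie) == "alice" # accepted by a different CAS
```

The design point the sketch captures: because validation depends only on material every server shares, no server-side session table is needed, which is exactly what makes the CAS tier stateless.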
1. Log on to your SQL Server with an account that has full access; ideally, you should use the same account that you used for your SharePoint installation (SQL_Service).
2. Open the SQL Server Management Studio interface and locate the SQL Server instance that contains your Central Administration database. The database name will be something similar to SharePoint_AdminContent_<GUID>, as shown in Figure. Right-click the database name and choose the Rename command from the shortcut menu to enter edit mode. Then press Ctrl+C to copy the existing name of the database for later use. Click anywhere outside of the database name to exit edit mode.
Be sure not to change the name at this point—it will be done at a later time.
3. While still in SQL Server Management Studio, back up the existing SharePoint_AdminContent_<GUID> database by right-clicking the name of the database and then selecting the Tasks command. Select Back Up to open the Back Up Database dialog box. Use all of the default settings for the backup and then click OK.
4. When you have successfully backed up the database, restore the information from the backup that you just performed to a new database having a user-friendly database name such as CentralAdmin_Content_DB. Perform the restore by right-clicking the existing database name again and selecting the Tasks command from the shortcut menu. Select Restore and then Database to open the Restore Database dialog box shown in Figure. In the To Database section, type in the new database name and then click OK at the bottom of the dialog box.
In practice, even if you follow these steps exactly, the restore can fail with an error, because by default the restore tries to reuse the original database’s physical file names, which are still in use by the existing database. To work around this:
In the Restore Database window, navigate to the Options tab and, under Restore the database file as, choose different file names for the database. For example, rename SharePoint_AdminContent_536468c9-7c19-4f20-b622-24c94f72c2cd.mdf to SharePoint_AdminContent.mdf, and do the same for the log file.
5. Open Central Administration. Under Application Management, click Manage Content Databases.
a. Select the SharePoint Central Administration v4 Web application using the Web application drop-down list.
b. Click the old database name, SharePoint_AdminContent_<GUID>.
c. Use the Database status drop-down option to change the status from Ready to Offline.
d. Do not select the option to remove the content database.
e. Click OK.
6. Log on to the server using the account that was used to provision the database. Usually this is the service user account that you configured SharePoint with when you provisioned the content databases during the installation of SharePoint 2010.
7. After opening the command prompt, perform the following steps.
a. Type cd C:\Program Files\Common Files\Microsoft Shared\Web server extensions\14\BIN\ to change the directory to the SharePoint 2010 root so you can run the STSADM commands.
b. Delete the original Central Administration database (the one with the GUID that you copied earlier) using the following command. Be sure to substitute UrlOfYourCentralAdministration and NamedInstanceOfYourSqlServer with the names from your SharePoint installation; if there is no named instance, use just the name of the SQL Server.
stsadm -o deletecontentdb -url http://UrlOfYourCentralAdministration:portnumber -databasename SharePoint_AdminContent_<GUID> -databaseserver NamedInstanceOfYourSqlServer
c. Associate the newly created database with your Central Administration using the following STSADM command (again substituting UrlOfYourCentralAdministration and NamedInstanceOfYourSqlServer with the names from your SharePoint installation).
stsadm -o addcontentdb -url http://UrlOfYourCentralAdministration:portnumber -databasename CentralAdmin_Content_DB -databaseserver NamedInstanceOfYourSqlServer
8. Return to Central Administration. Under Application Management, click Manage Content Databases and refresh the page to verify that your Central Administration database reflects the new database name as shown in Figure.
If you attempt to delete the original SharePoint_AdminContent_<GUID> database and receive an error indicating there are existing connections to the database,
you probably didn’t disassociate the database from SharePoint. Repeat the process from the beginning. If the error persists no matter what you do, select the Close existing connections option when deleting the database.
1. Memory Tuning
Before you set the amount of memory for SQL Server, you need to determine the appropriate memory setting. You do this by determining the total available physical memory and then subtracting the memory required for the operating system, any other instances of SQL Server, and other system uses, if the computer is not wholly dedicated to SQL Server.
There are two principal methods for setting the SQL Server memory options manually.
■ Set the min server memory and max server memory options to the same value, which corresponds to a fixed amount of memory allocated to the SQL Server buffer pool after the value is reached.
■ Set the min server memory and max server memory options to create a memory range. This is useful when system or database administrators configure an instance of SQL Server that is running in conjunction with other applications on the same computer like Exchange or any other line of business applications.
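The determination described above is simple subtraction; here is a sketch with illustrative numbers (the reserve sizes shown are assumptions for the example, not recommendations):

```python
def max_server_memory_mb(total_physical_mb: int, os_reserve_mb: int,
                         other_instances_mb: int = 0, other_apps_mb: int = 0) -> int:
    """Appropriate SQL Server memory: total physical RAM minus the memory
    required by the OS, other SQL Server instances, and other applications."""
    return total_physical_mb - os_reserve_mb - other_instances_mb - other_apps_mb

# e.g. a 32 GB server, 4 GB kept for the OS, 2 GB for another LOB application
print(max_server_memory_mb(32768, 4096, other_apps_mb=2048))  # -> 26624
```

The result would then be used as the max server memory value (and, for a fixed allocation, as min server memory too).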
2. Databases and Logs location
To make sure your databases are created on a RAID array or that the data files and transaction log files are located on different physical drives, you can modify the SQL Server Database Default Locations settings, as shown in Figure.
This setting does not affect existing databases—it affects only new databases created after the change to the Database Default Locations settings was made. To move existing databases, you have to detach, copy, and reattach the databases in SQL Server.
Model Database Settings
All new SharePoint databases will inherit most Model database properties. For instance, if you create a new database from within Central Administration, the new database will inherit the following properties from the Model database.
■ Initial Size of primary data file
■ Initial Size of transaction log
■ Recovery model
So we’ll have to modify the initial size of the Model database.
I set it to start at 50 MB with autogrowth of 100 MB, and size the logs at 25% of the database itself.
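The 25 percent rule works out as follows; this is just the arithmetic for the numbers above (applying the same ratio to the autogrowth increment is an assumption of the sketch):

```python
db_initial_mb = 50       # initial size of the Model database data file
db_autogrowth_mb = 100   # autogrowth increment for the data file
log_ratio = 0.25         # logs sized at 25% of the database

log_initial_mb = db_initial_mb * log_ratio
log_autogrowth_mb = db_autogrowth_mb * log_ratio
print(log_initial_mb, log_autogrowth_mb)  # -> 12.5 25.0
```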
3. Pre-Creating Your Content Databases
The advantage of pre-creating databases is that all the Model database settings are inherited, including the Autogrowth setting that is not inherited when you create your content databases from within SharePoint. However, a disadvantage is that the default collation setting of the Model database is SQL_Latin1_General_CP1_CI_AS, which is not compatible with the required SharePoint database collation type. Therefore, your DBA will have to be sure that the pre-created content databases use the appropriate database collation type of Latin1_General_CI_AS_KS_WS, as shown in Figure.
And that’s it for tuning SQL Server.
What Is Putability?
Putability is the quality of putting content into an information management and retrieval system with the correct metadata. It is the degree to which you put quality information into your information management system and represents the first step in an overall information process.
When it comes to putability, you must pay attention to two truths—ignore these at your own peril. First, what goes in must come out. This is the old “garbage in, garbage out” adage about computing that has been used since the 1960s, and it was originally developed to call attention to the fact that computers will unquestioningly process the most nonsensical of input data (garbage in) and produce equally nonsensical output (garbage out). It is amazing that otherwise very smart people will think they can load volumes of documents into SharePoint 2010 with little thought to organization and then expect to find individual documents quickly and easily. Chaos does not lead to organization, and organization is the foundation of findability.
The second truth associated with putability is that users will resist taking the time required to put quality information into the system. Managers seem to be especially resistant to having their information workers take time to properly tag their information as it goes into the system. It seems that the vast majority of information retrieval systems users mistakenly believe that as long as the information is indexed, they’ll be able to find it. User resistance is one of the most difficult obstacles to overcome when implementing a robust information organization effort. When asked who is responsible for tagging information, a survey4 elicited the following responses (multiple answers were allowed to this question, hence the total of more than 100 percent).
■ Authors (40%)
■ Records managers (29%)
■ SMEs (25%)
■ Anyone (23%)
■ Don’t know (12%)
■ No one (16%)
This means that in many organizations, users simply don’t know who is responsible for tagging information or are not directly assigned the tagging task to make that information more findable. In the absence of a governance rule that details who is responsible for tagging documents, the result is that anyone (and yet no one) will be able to apply metadata to a document. This is not a recipe for success.
What Is Findability?
Findability is the quality of being locatable or navigable.5 It is the degree to which objects are easy to discover or locate. As with putability, there are some truths to which you should pay attention. First, you can’t use what you can’t find. So it really doesn’t matter whether or not the information exists—if you can’t find it, you can’t use it. The corollary to this is that information that can’t be found is pragmatically worthless. You might have spent millions to develop that information, but if you can’t find it when you need it, it is worthless.
Second, information that is hard to find is hardly used. It stands to reason that users who won’t take the time to put quality information into the system will also not take much time to find the information they need. Often, if users can’t find information quickly and easily, they will simply e-mail someone who they think can find the information for them, or they may simply not include that information in their decision-making processes.
Of course, the more the information is needed by the user, the harder she will work to find it. But in the long run, users won’t put out any more effort than they believe is minimally necessary to do their job.
How Well Is Findability Understood?
When respondents were asked the question “How well is findability understood in your organization?” in the Findability and Market IQ survey conducted by the Association for Information and Image Management (AIIM),6 the following answers were given.
■ It is well understood and addressed. (17%)
■ It is vaguely understood. (31%)
■ Not sure how search and findability are different. (30%)
■ No clear understanding of findability at all. (22%)
This means that over half (52%) of the employees in organizations today that participated in the survey either don’t know what findability is, or they are not able to differentiate findability from search technologies. Many believe that if they have a stand-alone search tool, then findability is being adequately addressed. Nothing could be further from the truth.
Search is too often viewed as an application-specific solution for findability. Search technologies focus on trying to ask the right question and on “matching” keyword queries with content under the assumption that if the right words are input as the query, the right content will be found. What you must understand is that findability is not a technology—it is a way of managing information that is embedded in the organization. Achieving success with the information process relies on well-defined patterns and practices that are consistently applied to the information. Although it is true that search technologies support efforts to find information, search can’t fix the “garbage in, garbage out” problems present in most organizations today. The carelessness with which organizations manage information cannot be resolved by a technology.
The Paradox of Findability as a Corporate Strategy
When asked the degree to which findability is critical to their overall business goals and success, 62 percent of respondents indicated that it is imperative or significant. Only 5 percent felt it had minimal or no impact on business success. Yet 49 percent responded that even though findability is strategically essential, they have no formal plan or set of goals for findability in their organization. Of the other 51 percent who claimed to have a strategy, 26 percent reported that they used an ad hoc strategy—that is, they developed it only as needed—meaning that they had no strategy at all. Hence, 75 percent of organizations surveyed had no findability strategy, even though many believe it is strategically essential.
The Opportunity Cost of a Poor Information Process and Architecture
So, what are the opportunity costs of maintaining a poor information process and architecture? The information on cost studies is not robust, but the data available suggests that organizations are losing a lot of money on this problem. A good study on the costs related to time spent searching for hard-to-find information suggests the following.
■ Typical employees spend an average of 3.5 hours per week trying to find information but not finding it.
■ These employees spend another 3.0 hours recreating information they know exists, but that they cannot find.
This means that the average knowledge worker spends 6.5 hours per week searching for information, not finding it, and then recreating that information so that the worker can move forward in his or her job. At an average salary of $60,000 (US dollars) per year, this “lost” time equates to $9,750 per worker per year. In a company with 1000 employees, this equates to an annual opportunity cost of $9.7 million (US dollars). In addition, that company with 1000 employees will spend $5.7 million per year (US dollars) simply to have users reformat data as it moves between applications, and they will spend another $3.3 million per year dealing with version control issues.
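The figures above follow from straightforward arithmetic, assuming roughly 2,080 paid hours per year (40 hours × 52 weeks):

```python
salary = 60_000              # USD per year
paid_hours = 40 * 52         # 2,080 hours per year
hourly_rate = salary / paid_hours

hours_lost_per_week = 3.5 + 3.0    # searching + recreating information
hours_lost_per_year = hours_lost_per_week * 52

cost_per_worker = hours_lost_per_year * hourly_rate
print(round(cost_per_worker))         # -> 9750
print(round(cost_per_worker * 1000))  # -> 9750000, the ~$9.7M for 1,000 employees
```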
So, what prevents workers from finding information? The AIIM survey found the following reasons.
■ Poor search functionality (71%)
■ Inconsistency in how workers tag/describe data (59%)
■ Lack of adequate tags/descriptors (55%)
■ Information not available electronically (49%)
■ Poor navigation (48%)
■ Don’t know where to look (48%)
■ Constant information change (37%)
■ Can’t access the system that hosts the info (30%)
■ Workers don’t know what they are looking for (22%)
■ Lack the skills to find the information (22%)
Note that the first three reasons users cite are really about the putability side of the information process. So if an organization inconsistently tags and describes information, then the users will experience a poor result set, which is described in this list as “poor search functionality.”
A poor search functionality, from the viewpoint of the user, is really about a result set that is not helpful or is irrelevant. A relevant result set has content items that are useful and helpful to the user. In most cases, a poor result set that is blamed on a poor search technology is really just a mirror of a bad information architecture implementation. In other words, garbage was put into the system, and so the result set tends to be garbage, which results in the end user blaming the technology for a poor search functionality. However, if you have a strong information architecture coupled with users who work together to maintain a set of patterns and practices for how information is managed, then search results will likely be relevant to the user. Again, the real problem in the management of information does not have to do with technology, but rather with the people who interact with the information management and retrieval system—that is, with an organization’s willingness to invest in the staff as well as the software so that, working together (people and software), they can establish a successful, usable information architecture.
So here’s the juice
During SQL Server deployment, this is the SQL Server service account; it must be an account with network access.
SQL Server prompts for this account during SQL Server Setup. This account is used for the following SQL Server services:
• SQL Server (MSSQLSERVER)
• SQL Server Agent (SQLSERVERAGENT)
If you are not using the default instance, these services will have instance-specific names: SQL Server (MSSQL$InstanceName) and SQL Server Agent (SQLAgent$InstanceName).
This account must be a domain user account.
When installing SharePoint
The user account that is used to run:
• Setup on each server computer
• The SharePoint Products and Technologies Configuration Wizard
• The Psconfig command-line tool
• The Stsadm command-line tool
This account must be:
• A domain user account.
• Member of the Administrators group on each server on which Setup is run.
• SQL Server login on the computer running SQL Server.
Member of the following SQL Server security roles:
• securityadmin fixed server role
• dbcreator fixed server role
• db_owner fixed database role for any databases that Setup or the command-line tools update
When running Farm Configuration Wizard
This account, also referred to as the database access account, is the account specified as the Service Account when running the Farm Configuration Wizard.
This account is:
• The identity for the application pool that hosts the SharePoint Central Administration Web site.
• The process account for the Windows SharePoint Services Timer service.
This account must be:
• A domain user account.
• A member of the local Administrators group on every server in the farm.
Additional permissions are AUTOMATICALLY granted for this account on Web servers and application servers that are joined to a server farm.
This account is automatically added as a SQL Server login on the computer running SQL Server and added to the following SQL Server security roles:
• dbcreator fixed server role
• securityadmin fixed server role
• db_owner fixed database role for all databases in the server farm