Tuesday, December 23, 2008

Merry Christmas and Happy New Year

I would like to wish a Merry Christmas and a Happy New Year to all my friends, students, and visitors. I would not have gotten this far without your support, and I do appreciate it.

I wish you and your lovely families all the best, a great Xmas and a Happy New Year, and I hope the New Year brings good health and high spirits.

Oz ozugurlu MVP (Exchange)


MCSE (M+, S+), MCDST, Security+, Server+, Project+



Monday, December 22, 2008

Offline defrag & Exchange maintenance

Should offline defrag be considered one of the scheduled maintenance tasks for Exchange administrators? The short answer is "No". The simple reason: taking Exchange offline causes an outage, and if there is no real space to gain (white space of around 30 percent of the file), there is no point in performing an offline defragmentation at any cost.

Let me state this up front: those of you running the Enterprise edition of Exchange should never perform an offline defrag and cause an outage. Instead, create an empty database (at night, during off-business hours), move the user mailboxes onto it, and delete the old database that contains the white space. (This assumes you have at least one spare mail store available to achieve this goal.)

The process behind an offline defrag is that Exchange does not take the existing database and strip the white pages out of it in place. Instead, it copies the used pages from the old database into a brand-new database file. When the page copy finishes, it re-points the logs to the new database and assigns a new signature to it.
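As a toy sketch of that copy-to-a-new-file behavior (the function and page model here are illustrative only, not Exchange internals; `None` stands in for white-space pages):

```python
import uuid

# Toy model of what an offline defrag does, per the paragraph above:
# it does not shrink the old file in place; it copies the used pages
# into a brand-new database and stamps a new signature on it.
def offline_defrag(old_pages):
    new_pages = [p for p in old_pages if p is not None]  # copy used pages only
    new_signature = uuid.uuid4()                         # rebuilt DB gets a new signature
    return new_pages, new_signature

pages, signature = offline_defrag(["page1", None, "page2", None, None, "page3"])
# pages is now ["page1", "page2", "page3"]; the old file's white space is gone
```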

How do we know if we need to perform an offline defragmentation? Event ID 1221 in the Exchange server's Application log will tell you how much white space (unusable space) each database contains.
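The decision rule described above can be sketched as a small helper (the 30 percent threshold is the rule of thumb from earlier in this post, not a hard limit; the function name is mine):

```python
# Decide whether an offline defrag is worth the outage, based on the white
# space reported by event ID 1221. The 30 percent threshold is an assumed
# rule of thumb from the discussion above, not a hard limit.
def worth_defragmenting(db_size_gb, white_space_gb, threshold=0.30):
    if db_size_gb <= 0:
        return False
    return white_space_gb / db_size_gb >= threshold

print(worth_defragmenting(80, 30))  # about 37% white space -> True
print(worth_defragmenting(80, 10))  # about 12% white space -> False
```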

Here is a great article that goes deep into event 1221:


Here is the MS team blog post that goes deep into it:



Blog: http://www.smtp25.blogspot.com/

Sunday, December 21, 2008

Dedicated Active Directory Sites for Exchange 2007

I remember that placing Exchange servers in their own Active Directory site used to be the best practice with Exchange 2003. To protect the DC/GCs and dedicate them to Exchange use only, and to keep other processes from eating up all the DC/GC resources in the multimaster replication model, we would even adjust the DC Locator DNS SRV records and give those DC/GCs higher priority so that they would be used by Exchange only.

The problem is that hard-coding DSAccess creates a single point of failure if the dedicated DCs are not accessible. Putting Exchange in its own site forces DSAccess to locate DCs within its own site first; if those are not accessible, it will go out and locate other DCs, so Exchange remains happy and functional.

The DC/GC placement ratio was 1:4, meaning one DC/GC per four Exchange servers; for Exchange 2007 it is one DC/GC per mailbox server. (Details are in the following article.)

How does all this apply to Exchange 2007? If you remember the changes in the Exchange 2007 structure, namely site-based routing, you can already make a close guess: dedicated DC/GCs are not recommended for Exchange 2007.

I always wondered why we could not have more control over DSAccess, such as hard-coding it to specific servers while also being able to say: if these don't respond, automatically discover any available GC, or even tell it where to go second. Maybe in the next version of Exchange (-: , who knows.

Here is an excellent article on the Exchange Team Blog.



Monday, December 15, 2008

Recommended mailbox size and Exchange Databases

Below are classic questions that have been asked at least once in every consulting engagement I have walked into so far. I do understand, from the customer's standpoint, why these questions get asked of an Exchange SME.

In my experience, most of the contracts I dealt with did not follow the MS best practices. For the ones that did, there was very little work to do, or there was no reason for me to be there at all.

Here is a great tip for those who wish to implement MS best practices: use the magic way of locating this information, "Google it". Most of the time, Google will take you to the right article or TechNet page faster than any other method I know of. I have also included some basic foundation information that I think is very important for all Exchange administrators to know, so they can better understand the entire concept.

Optimizing Database Access

According to best practices, Exchange databases are not to be larger than 50 to 100 GB, as explained in the MS article below; the link to the original article is also included.

  • For servers supporting large information stores (50 to 100 GB), it is especially important to follow these guidelines:
  • Place transaction log files and database files on different disks.
  • Dedicate a high performance spindle to the transaction logs.
  • Use a dedicated partition for the databases. Experience shows that as servers get bigger, the database partition starts to use a lot of I/O. This is especially true for RAID 5 partitions because of the added overhead. As a result, it's a good idea to only put database files on the database partition.
  • Put the MTA database and tracking logs on the system disk (if you don't have a spare spindle), not the database partition.

    The reality behind database size is that the bigger the database gets, the harder it is for the application to handle; the same goes for the I/O, CPU, and memory relationship. The factors listed above are there to provide the best performance, and the RAID choices indicate the most redundancy for a given configuration. The fact is that understanding what type of operations an application performs is the key to deciding the RAID level. For instance, if a task or process is sequential and write-intensive (the transaction logs), the RAID level has to be chosen for that write pattern; if it is random read- and write-intensive (the database), the same reasoning applies to that pattern, and RAID 5 in particular adds write overhead. So keep in mind that the fastest and most redundant design comes from considering these factors and performing the implementation accordingly.

The following sample disk configuration is recommended for typical large servers.

  • Mirror set 1

    System disk. Includes binaries, swap file, MTA database.

  • Mirror set 2

    Transaction files only.

  • RAID 5 partition

    Exchange information store and directory databases only.

What is the recommended mailbox size per user, and what are some industry best practices out there that people are already using?

  • What is the recommended size for Exchange databases?

    50 to 100 GB per database.

  • Offline Exchange defragmentation, and how long will it take?

    It takes about one hour to defragment 5 to 10 GB.

  • Why do an Exchange offline defragmentation?

    If there is enough white space (unusable space) in the Exchange databases, the administrator might consider performing an offline defragmentation. Remember, offline means an outage, so it has to be planned with the business owners and end users.

    There is no need to perform an offline defrag if you are running the Enterprise edition of Exchange. Simply create a new database, move the users into the newly created database, and delete the old one.

  • Is your mailbox big, and is it causing performance issues on the Exchange server?

    It's not the size of the mailbox that impacts performance; it is the number of items in the folder or folders that are being accessed on the server. Read more

PS: Please remember these numbers are estimates; the actual CPU, disk, and amount of memory in the system will affect them.

The information below is taken from the same article. If you pay attention to the next few paragraphs, you will have a good foundation for understanding how Exchange writes data to the database.

When Exchange is running, technically the databases are inconsistent.

While Exchange is running normally, the databases are technically inconsistent. Why is that so?

The Exchange database engine caches the disk in memory by swapping 4 KB chunks of data, called pages, in and out of memory. It updates the pages in memory and takes care of writing new or updated pages back to the disk. This means that when requests come into the system, the database engine can buffer data in memory so it doesn't have to constantly go to disk. This makes the system more efficient because writing to memory is "cheaper" (or faster) than writing to disk. When users make requests, the database engine starts loading the requests into memory and marks the pages as "dirty" (a dirty page is a page in memory that has been written with data). These dirty pages are then later written to the information store databases on disk.
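A minimal sketch of that dirty-page mechanism, assuming a toy in-memory model (the class and field names are illustrative, not Exchange internals):

```python
# Minimal model of the 4 KB page cache described above: writes land in
# memory first and are marked "dirty"; only a later flush brings the
# on-disk database back in sync. Names here are illustrative only.
class PageCache:
    PAGE_SIZE = 4096  # pages are 4 KB chunks of data

    def __init__(self):
        self.memory = {}    # page number -> latest data (in RAM)
        self.disk = {}      # page number -> data as last written to disk
        self.dirty = set()  # pages updated in memory but not yet on disk

    def write(self, page_no, data):
        self.memory[page_no] = data
        self.dirty.add(page_no)   # the on-disk database is now out of date

    def flush(self):
        for page_no in self.dirty:
            self.disk[page_no] = self.memory[page_no]
        self.dirty.clear()        # memory and disk are back in sync

cache = PageCache()
cache.write(1, "new mail item")
in_sync_while_running = not cache.dirty  # False: disk lags memory
cache.flush()
in_sync_after_flush = not cache.dirty    # True: "consistent" again
```

This is exactly why the databases are "technically inconsistent" while the server runs: between `write` and `flush`, the latest version exists only in memory.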

Although caching data in memory is the fastest and most efficient way to process data, it means that while Exchange is running, the information on disk is never completely up to date. The latest version of the database is in memory, and since many changes in memory haven't made it onto disk yet, the database and memory are out of sync.

Why log files are so important (many people think the databases are the most important part)

  • Most people naturally think that the database files are the most important aspect of data recovery. But, transaction log files are actually more important because they reflect what will happen with the data, not what has happened.
  • Transaction log files are a sequence of files whose purpose is to keep a secure copy on disk of volatile data in memory, so the system can recover in the event of a failure.
  • When a change is made to the database, the database engine updates the data in memory and synchronously writes a record of the transaction to the log file; that record tells it how to redo the transaction in case the system fails.
  • Logically, you can think of the data as moving from memory to the log file and then to the database on disk, but what actually happens is that data moves from memory directly to the database on disk.
  • To keep track of the data that hasn't yet been written to the database file on disk, the database engine maintains a checkpoint file, called Edb.chk, for every log file sequence. The checkpoint file is a pointer into the log sequence that maintains the status between memory and the database file on disk. It indicates the point in the log file from which the information store needs to start recovery if there has been a failure. In fact, the checkpoint file is essential for efficient recovery: if it didn't exist, the information store would have to attempt recovery by starting from the beginning of the oldest log file it found on disk and then check every page in every log file to determine whether it had already been written to the database.
  • Circular logging: don't use it! Here is why.

    It eliminates your ability to recover all changes since your last backup if your information store is corrupted due to a hardware failure. Remember, the logs are a duplicate of the real data being written to the database; they are insurance for Exchange in case the information in memory vanishes. When you turn on circular logging and the old logs start getting deleted as new ones come in, you are eliminating your Exchange server's ability to recover if it crashes, so you would need your last full backup to get back in business. If you do turn on circular logging, it is advised to take an immediate full backup.
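The log, checkpoint, and recovery behavior in the bullets above can be sketched as a toy model (the checkpoint here is just an integer index playing the Edb.chk role; all names are illustrative):

```python
# Toy write-ahead-log model of the bullets above: every change is recorded
# in the log before the database file is updated, and a checkpoint (the
# Edb.chk role) marks how far the database has caught up, so recovery only
# replays log records written after the checkpoint.
class LoggedStore:
    def __init__(self):
        self.log = []        # durable transaction log records
        self.db = {}         # the on-disk database file
        self.checkpoint = 0  # index of the first log record not yet in db

    def write(self, key, value):
        self.log.append((key, value))    # synchronous log write comes first

    def flush(self):
        for key, value in self.log[self.checkpoint:]:
            self.db[key] = value
        self.checkpoint = len(self.log)  # advance the checkpoint pointer

    def recover(self):
        # After a crash, replay only the records past the checkpoint.
        # Without the checkpoint, we would have to replay every log file
        # from the start; with circular logging, the records would be gone.
        self.flush()

store = LoggedStore()
store.write("inbox/1", "hello")
store.flush()                    # database caught up; checkpoint advances
store.write("inbox/2", "world")  # logged, then a crash before the flush
store.recover()                  # replays only ("inbox/2", "world")
```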

When offline defrag runs, it creates a new database file and then copies all the data in the old file to the new file.

This can take a lot of time. On average, it takes about one hour to defragment 5 to 10 GB.
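Using that rule of thumb, the outage window can be roughly estimated (the 7.5 GB/hour midpoint is my assumption; actual throughput depends on the hardware):

```python
# Rough outage estimate from the "one hour per 5 to 10 GB" rule above.
# 7.5 GB/hour is an assumed midpoint; real throughput varies by hardware.
def defrag_hours(db_size_gb, gb_per_hour=7.5):
    return db_size_gb / gb_per_hour

print(round(defrag_hours(60), 1))  # a 60 GB store -> about 8.0 hours
```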


Recommended Mailbox Size Limits


