Microsoft Developer & IT Pro Events schedule: August 2015
Use PoshUSNJournal Module to Work with Change Journal
Summary: Boe Prox shows how to use the PoshUSNJournal module to work with the USN change journal.
Honorary Scripting Guy and Windows PowerShell MVP, Boe Prox, here today, filling in for my good friend, the Scripting Guy. I'm finishing up my three-day stint on the Hey, Scripting Guy! Blog. Be sure to catch up by reading:
Today, I will demo my module called PoshUSNJournal and show you how to use it with the change journal.
I have taken you on the journey of using Windows PowerShell along with PInvoke via reflection to hook into the Windows API to view the change journal and look at the entries to see what is happening on your file system. Although all of this can be done with a little bit of work, it is nicer to have the ability to do it via functions from a module.
This is what PoshUSNJournal aims to do. Not only can you do everything that I have already covered, but this module takes it a little further by letting you wait for incoming entries for a near real-time view of what is happening. You can also configure the journal by removing it and re-creating it with a larger or smaller size!
Are you running Windows PowerShell 5.0? Great! You can install this module with pretty much no effort from the Windows PowerShell Gallery:
Install-Module –Name PoshUSNJournal –Verbose
No worries if you do not have Windows PowerShell 5.0 yet. You can grab the module from my GitHub site: PoshUSNJournal. Place it in your modules folder and you are ready to go!
I’ll start off by showing how quickly we can view the journal by using Get-USNJournal with the DriveLetter parameter:
Get-UsnJournal -DriveLetter C:
Pretty cool, but we want more than that! I can delete this journal by using Remove-USNJournal and create a new one that is a little larger than the 30 MB one that we currently have. Maybe something like 50 MB would be better.
Remove-USNJournal –DriveLetter C: -Verbose
A verification using Get-USNJournal shows that it is, in fact, completely gone from my system.
Of course, I need something here to continue demoing the entries, so I will re-create the journal and set it to be 50 MB in size:
New-UsnJournal -DriveLetter C: -Size 50MB -Allocation 8MB –Verbose
With that done, we can now look at tracking the changes in the file system by using Get-USNJournalEntry.
If you view the Help to see the parameters for this function, you will see that you can specify a USNReasonMask to filter entries and even watch them arrive in near real time.
A basic run of Get-USNJournalEntry starts at the beginning of when we created the journal and begins showing all of the changes that have occurred since then.
As you can see, some of this (such as the use of SnagIt) has to do with this very post! If you wanted to view everything, you can definitely do that, but keep in mind that you may be waiting a while because there could potentially be a lot of data to process. Filtering for a specific file or USNReason code will definitely help out here.
Get-UsnJournalEntry | Where {$_.FileName -match '\.psd1$'}
In this case, I wanted to see if I had any .psd1 files that had been updated since I created the journal. It turns out that I did, and I can see that it was actually deleted.
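If you would rather filter at the source, the USNReasonMask parameter mentioned earlier looks like the tool for that job. A minimal sketch, assuming the parameter accepts friendly reason names such as FileDelete (check Get-Help Get-UsnJournalEntry -Full for the exact values):

Get-UsnJournalEntry -DriveLetter C: -USNReasonMask FileDelete   # 'FileDelete' is an assumed value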
The last thing I will show is monitoring the journal by using the –Tail and –Wait parameters:
Get-UsnJournalEntry -DriveLetter C: -Tail –Wait
Take note of the New Text Document.txt and TestFileToDelete.txt entries. These refer to the same file: you can see how it was first created (by right-clicking the Desktop and selecting New > Text Document) before I renamed it.
You can see the file is then sent to the recycle bin under its new name, $I27RNAF.txt, and its subsequent return from the recycle bin (under the USN_REASON_RENAME_OLD_NAME and USN_REASON_RENAME_NEW_NAME masks). The hard deletion is presented under the USN_REASON_FILE_DELETE mask. You can also see that other changes were made on the file system while I was testing against the text file.
With that, we are done exploring the USN change journal by using Windows PowerShell. We explored two methods—we took a dive using PInvoke with reflection and we used my module, PoshUSNJournal. (This module is available on GitHub and it is always available for pull requests to make it better!)
We invite you to follow the Scripting Guys on Twitter and Facebook. If you have any questions, send email to the Scripting Guy at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. Until then, see ya!
Boe Prox, Windows PowerShell MVP and Honorary Scripting Guy
Update Rollup 10 for System Center 2012 SP1 Operations Manager is now available
Just a quick note to let you know that Update Rollup 10 for System Center 2012 SP1 Operations Manager is now available. The KB article below describes all of the issues that are fixed in OpsMgr 2012 SP1 UR10 and contains complete installation instructions as well.
IMPORTANT: Be aware that the SCOM Web Console package for UR10 includes an important security update (https://technet.microsoft.com/en-us/library/security/MS15-086).
For complete details regarding UR10, please see the following:
KB3071088 - Update Rollup 10 for System Center 2012 SP1 Operations Manager (https://support.microsoft.com/en-us/kb/3071088/)
Suraj Suresh Guptha | Program Manager | Microsoft
Get the latest System Center news on Facebook and Twitter:
System Center All Up: http://blogs.technet.com/b/systemcenter/
Configuration Manager Support Team blog: http://blogs.technet.com/configurationmgr/
Data Protection Manager Team blog: http://blogs.technet.com/dpm/
Orchestrator Support Team blog: http://blogs.technet.com/b/orchestrator/
Operations Manager Team blog: http://blogs.technet.com/momteam/
Service Manager Team blog: http://blogs.technet.com/b/servicemanager
Virtual Machine Manager Team blog: http://blogs.technet.com/scvmm
Update Rollup 8 for System Center 2012 Operations Manager is now available
Just a quick note to let you know that Update Rollup 8 for System Center 2012 Operations Manager is now available. The KB article below describes the issue that is fixed in OpsMgr 2012 UR8 and contains complete installation instructions as well.
IMPORTANT: Be aware that the SCOM Web Console package for UR8 includes an important security update (https://technet.microsoft.com/en-us/library/security/MS15-086).
For complete details regarding UR8, please see the following:
KB3071089 - Update Rollup 8 for System Center 2012 Operations Manager (https://support.microsoft.com/en-us/kb/3071089/)
Suraj Suresh Guptha | Program Manager | Microsoft
Get the latest System Center news on Facebook and Twitter:
System Center All Up: http://blogs.technet.com/b/systemcenter/
Configuration Manager Support Team blog: http://blogs.technet.com/configurationmgr/
Data Protection Manager Team blog: http://blogs.technet.com/dpm/
Orchestrator Support Team blog: http://blogs.technet.com/b/orchestrator/
Operations Manager Team blog: http://blogs.technet.com/momteam/
Service Manager Team blog: http://blogs.technet.com/b/servicemanager
Virtual Machine Manager Team blog: http://blogs.technet.com/scvmm
Resolving schedule variances
Hardly any long-running project escapes schedule variances. To stay in control of the schedule, the project manager needs to know when a variance occurred and how large it is; only then can the schedule be corrected. To help pinpoint when a variance occurred, the project plan for publishing a new children's book contains the following information:
- A deadline date applied to a time-critical task;
- A project baseline against which actual project performance can be compared.
The deadline date and the project baseline will help you identify and resolve schedule problems.
Now in public preview: The Converged Microsoft Account and Azure Active Directory Programming Model
Preview of the Skype for Business apps for iOS and Android
In the first months after the release of Skype for Business we received a great deal of positive feedback, and we are pleased to announce that preview versions of the Skype for Business apps for iOS and Android are available today. Below is information about the new features and about joining the preview program.
#PapierlosLernen: A digital experiment with Surface 3, OneNote, and Office Lens
A paperless start to the school year: two 16-year-old students at the Gymnasium Thomaeum in Kempen on the Lower Rhine are going completely paper-free at school for a year. Instead, they are relying on the Surface 3 and Office 365 with OneNote. Their goal is to follow lessons more closely and then use their notes to prepare for tests and exams. Can it work?
Trading two kilos of paper for modern technology
"Lately I was lugging around two kilos of binders full of paper every day," recalls Sebastian Franz, who came up with the idea for the experiment together with his classmate Marco Nagels. "And much of that paper simply gets thrown away afterwards because it is no use for exam preparation," Sebastian says. That is what the two want to change in the coming school year.
The students know the Surface from friends and acquaintances. Both value the tablet as a "mature, powerful device that can keep up in class." They have not had much to do with Office so far, but Marco knows his way around the note-taking software OneNote, which will play a central role in their plans for the coming school year - after all, notes and documents can easily be shared and edited across all devices.
A separate OneNote notebook for every subject
"We will create a separate notebook in OneNote for each subject," Sebastian reports. "We photograph board sketches and worksheets with Office Lens and save them straight into OneNote." They will also capture their daily class notes with the program. Whether they will use the Surface Pen or the keyboard is still undecided; true to the spirit of the experiment, they will put both to a practical test. "The good thing is that we don't have to think about the technology all the time in class. It just works," says Sebastian. "We can concentrate entirely on the material instead."
The project aims to promote digital technology in the classroom
In Dirk Brinkmann, a physics and chemistry teacher and, according to the students, the school's unofficial head of IT, they have already found a supporter for their project. He helps the two with technical questions and with discussing their unusual plan with the school administration and other teachers. After all, what they are planning is far from a given. "There are certainly teachers who are less tech-savvy and fear that we will undermine their lessons with the Surface," Sebastian says. "But with our project we are of course trying to win them over too." IT skills, he is convinced, are important - for school, but also for university and later working life, "because today, and even more so in the future, you work with computers and mobile devices almost everywhere."
But the eleventh-grader doesn't want to look that far ahead yet. First of all, he and his classmate hope for direct benefits from Surface and Office 365 in their everyday school life: "We think we will learn better because OneNote lets us give the material more structure than is possible on paper. That will help us sort out more precisely what really matters for tests and exams and what doesn't."
Digital literacy isn't learned in theory; only practice moves you forward. Straight from that daily practice, Sebastian and Marco will report here under the hashtag #PapierlosLernen. They will let us share in their paperless experiment throughout the 2015/16 school year. I can't wait!
A post by Diana Heinrichs (@dianatells)
Communications Manager New Workstyle at Microsoft Germany
- - - -
About the author
The new world of work has arrived in the DAX - technologically. Two thirds of the DAX 30 companies now rely on the Microsoft Cloud. Export champions from A for adidas through B for BASF to H for Henkel are well aware that the road to connected work via the cloud is not a purely technological one. It affects corporate culture and leadership style just as much as employees and their everyday work. Bridging the gap between the technical and the cultural perspective is Diana Heinrichs' job as Communications Manager for Connected Work on the Microsoft PR team. She always keeps her Twitter channel at hand - and enjoys a life beyond 9 to 5.
Office system requirements for using Office 365
Office 365 is designed to work with versions of Office that are in mainstream support. Accordingly, the Office system requirement for using Office 365 is a version of Office that is in mainstream support.
For details, see the Office 365 system requirements.
Office clients in mainstream support
* Office 2010 Service Pack 2 (SP2)
* Office 2013
Microsoft continuously applies the latest updates to Exchange Online and releases matching Office updates. For the best experience, we recommend keeping Office updated to the latest version as well.
Notes
- Office clients in extended support can continue to use the Office 365 services, but they are not designed to work with Office 365. Because problems may occur, please update to an Office client that is in mainstream support.
Office clients in extended support
* Office 2007
* Office 2010 with no service pack (RTM)
* Office 2010 SP1
- If you use Office 2010, you must apply SP2 to bring it into mainstream support.
- The latest service pack for Office 2013 is SP1. At present, Office 2013 is in mainstream support even without SP1. (When SP2 is released in the future and 12 months have passed, SP1 will move to extended support and SP2 will become required.)
- Office 365 Professional Plus (Click-to-Run) is updated to the latest version automatically. (It is not updated if you pin a specific build with the Office Deployment Tool (ODT), or if you have disabled automatic updates and do not update manually.)
- If you use the MSI version of Office 2013/2010, you need to configure WSUS/SCCM/Microsoft Update or similar so that it is kept up to date.
- The latest Outlook updates and detailed version (build) numbers can be found on the following Microsoft website:
Outlook and Outlook for Mac Update Build Numbers
Check the build number of the currently installed Outlook in the properties of Outlook.exe or via the steps below, and consider updating if it is not the latest build.
2013: [File] tab > [Office Account] > [About Outlook]; the build number after "Microsoft Outlook 2013"
2010: [File] tab > [Help] > [Additional Version and Copyright Information]; the build number after "Microsoft Outlook 2010"
2007: [Help] menu > [About]; the build number after "Microsoft Office Outlook 2007"
Note: The build numbers shown in the locations below are the build number of MSO.DLL. Be sure to check the locations above instead.
2013: the build number shown to the right of [Update Options] under [File] tab > [Office Account]
2010: the build number shown after [Version:] under [File] tab > [Help]
2007: the build number after "MSO" in [Help] menu > [About]
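If you prefer Windows PowerShell over the dialogs above, a hedged one-liner reads the file version of Outlook.exe directly. The path below is only an example and varies by Office version, bitness, and installation type:

# Example path for 32-bit Office 2013 on 64-bit Windows; adjust for your installation
(Get-Item 'C:\Program Files (x86)\Microsoft Office\Office15\OUTLOOK.EXE').VersionInfo.ProductVersion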
- You can check the latest Office updates by searching for the product name on the Microsoft Update Catalog site (Outlook updates can be found there as well).
Because there are many kinds of Office updates, we recommend using WSUS/SCCM/Microsoft Update to detect and apply them automatically.
- Mainstream support for Office 2007 already ended on October 12, 2012.
- Mainstream support for Office 2010 ends on October 13, 2015.
- Mainstream support for Office 2013 ends on April 10, 2018.
Reference material on support lifecycles
Office 2013 support lifecycle
Office 2010 support lifecycle
Office 2007 support lifecycle (in extended support, as mainstream support has ended)
Microsoft Support Lifecycle
Service Pack Support Lifecycle Policy
Online Services for Business and Developers
Change history (2015/8/12)
The following text was moved to the [Notes] section:
- Office clients in extended support can continue to use the Office 365 services, but they are not designed to work with Office 365. Because problems may occur, please update to an Office client that is in mainstream support.
Office clients in extended support
* Office 2007
* Office 2010 with no service pack (RTM)
* Office 2010 SP1
About the issue where the Office Online license check box does not work
Hello, this is Yuka Seki from the SharePoint support team.
In this post, I will describe a behavior in which the use of Office Online cannot be controlled even when the Office Online license has not been granted to a particular user.
Symptom
In Office 365, you can manage which services each user can use by assigning licenses for specific Office 365 services per user. Selecting the check box for a service on each user's license assignment page (shown below) allows that user to use the corresponding service. However, with the current behavior, it has been reported that the Office Online features remain usable regardless of the Office Online check box setting, even when it is cleared.
Cause
This behavior is caused by a known issue in the product.
Points to note
This behavior may be fixed in the future. If you have been operating with the Office Online license unassigned, the Office Online features could then suddenly stop working for those users. For accounts that should continue to use the Office Online service, please assign the Office Online license.
Possible workarounds at this time
It is currently not possible to restrict the use of Office Online per user.
If you need to restrict the use of the Office Online features on SharePoint Online, you can do so by forcing Office documents to open in the client application, configured at the site collection or library level. However, this setting cannot provide per-user control. The configuration steps are described below as supplementary information for your reference.
[Supplementary information]
A. Restricting the use of Office Online at the site collection level
You can configure a site collection to open documents in the Office client applications by default, as follows:
1) Access the target site collection as a site collection administrator.
2) Click the gear button, and then click [Site settings].
3) In the [Site Collection Administration] section, click [Site collection features].
4) Click "Activate" for the [Open Documents in Client Applications by Default] feature.
B. Restricting the use of Office Online at the library level
You can configure a library to open documents in the Office client applications by default, as follows:
1) Access the target library as a site administrator.
2) On the ribbon, click [Library Settings].
3) In the [General Settings] section, click [Advanced settings].
4) Set [Default open behavior for browser-enabled documents] to "Open in the client application".
That is all for this post.
Why Exchange 2013 CU6+ uses out-of-site DCs/GCs
We have had a few escalations from customers who noticed heavy traffic between Exchange 2013 CU6+ and out-of-site DCs/GCs.
When we run Get-ExchangeServer –Status, we can see that Exchange uses out-of-site DCs, while at the same time event 2080 shows that other in-site DCs are available.
Here is how it looks in Exchange 2010 and Exchange 2013 RTM – CU5:
I used a topology with 4 in-site DCs and 1 out-of-site DC.
From event 2080:
Process Microsoft.Exchange.Directory.TopologyService.exe (PID=2276). Exchange Active Directory Provider has discovered the following servers with the following characteristics:
(Server name | Roles | Enabled | Reachability | Synchronized | GC capable | PDC | SACL right | Critical Data | Netlogon | OS Version)
In-site:
DC001.CU1.com CDG 1 7 7 1 0 1 1 7 1
dc2.CU1.com CDG 1 7 7 1 0 1 1 7 1
DC3.CU1.com CDG 1 7 7 1 0 1 1 7 1
dc4.CU1.com CDG 1 7 7 1 0 1 1 7 1
Out-of-site:
dc5.CU1.com CDG 1 7 7 1 0 1 1 7 1
Get-ExchangeServer exch5-cu1 -Status
CurrentDomainControllers : {dc2.CU1.com, DC001.CU1.com, dc4.CU1.com, DC3.CU1.com}
CurrentGlobalCatalogs : {dc2.CU1.com, DC001.CU1.com, dc4.CU1.com, DC3.CU1.com}
CurrentConfigDomainController : DC001.CU1.com
>netstat -n | findstr 3268
We can see established connections with all 4 GCs
Turn off DC4
Information MSExchange ADAccess 2070 Topology:
Process MSExchangeHMWorker.exe (ExHMWorker) (PID=3116). Exchange Active Directory Provider lost contact with domain controller dc4.CU1.com. Error was 0x34 (Unavailable) (Active directory response: The server is unavailable.). Exchange Active Directory Provider will attempt to reconnect with this domain controller when it is reachable.
Get-ExchangeServer exch5-cu1 -Status
CurrentDomainControllers : {DC001.CU1.com, DC3.CU1.com, dc2.CU1.com }
CurrentGlobalCatalogs : {DC001.CU1.com, DC3.CU1.com, dc2.CU1.com}
CurrentConfigDomainController : DC001.CU1.com
>netstat -n | findstr 3268
We can see established connections with 3 GCs
Turn off DC3
CurrentDomainControllers : {DC001.CU1.com, dc2.CU1.com}
CurrentGlobalCatalogs : {DC001.CU1.com, dc2.CU1.com}
>netstat -n | findstr 3268
We can see established connections with 2 In-Site GCs
Turn off DC2
CurrentDomainControllers : {DC001.CU1.com}
CurrentGlobalCatalogs : {DC001.CU1.com}
CurrentConfigDomainController : DC001.CU1.com
>netstat -n | findstr 3268
We can see connections only to DC001
From event 2080:
Process Microsoft.Exchange.Directory.TopologyService.exe (PID=2276). Exchange Active Directory Provider has discovered the following servers with the following characteristics:
(Server name | Roles | Enabled | Reachability | Synchronized | GC capable | PDC | SACL right | Critical Data | Netlogon | OS Version)
In-site:
DC001.CU1.com CDG 1 7 7 1 0 1 1 7 1
dc2.CU1.com CDG 1 0 0 0 0 0 0 0 0
DC3.CU1.com CDG 1 0 0 0 0 0 0 0 0
dc4.CU1.com CDG 1 0 0 0 0 0 0 0 0
Out-of-site:
dc5.CU1.com CDG 1 7 7 1 0 1 1 7 1
In other words: Exchange does not try to establish a connection to an out-of-site DC while at least one in-site DC is available.
What happens as soon as you update your servers to CU6+:
Get-ExchangeServer exch5-cu1 -Status
CurrentDomainControllers : {dc2.CU1.com, DC001.CU1.com, dc4.CU1.com, DC3.CU1.com}
CurrentGlobalCatalogs : {dc2.CU1.com, DC001.CU1.com, dc4.CU1.com, DC3.CU1.com}
CurrentConfigDomainController : DC001.CU1.com
>netstat -n | findstr 3268
We can see established connections with all 4 GCs
Same as in RTM
Turn off DC4
Get-ExchangeServer exch5-cu1 -Status
CurrentDomainControllers : {DC001.CU1.com, DC3.CU1.com, dc2.CU1.com }
CurrentGlobalCatalogs : {DC001.CU1.com, DC3.CU1.com, dc2.CU1.com}
CurrentConfigDomainController : DC001.CU1.com
>netstat -n | findstr 3268
We can see established connections with 3 GCs
Same as RTM
Turn off DC3
CurrentDomainControllers : {DC001.CU1.com, dc2.CU1.com, dc5.CU1.com}
CurrentGlobalCatalogs : {DC001.CU1.com, dc2.CU1.com, dc5.CU1.com}
CurrentConfigDomainController : DC001.CU1.com
NEW!!!
We established a connection to the out-of-site DC dc5.cu1.com.
This is by design: if the number of suitable in-site DCs is less than MinSuitableServer (3 by default), out-of-site DCs will be used. Once the number of in-site DCs is at least MinSuitableServer again, out-of-site DCs should no longer be used.
Previously, when an Exchange process asked for domain controllers, the topology service returned servers from either the in-site list or the out-of-site list, never both. That is, as long as there was a single suitable DC in the in-site list, the topology service returned it and did not search the out-of-site list any further, no matter how many servers the client requested.
This could cause load-balancing issues, especially during a site failover: the good domain controllers remaining in the site being failed out carried much more load than DCs outside the site.
To fix this, a new configurable setting, MinSuitableServer, was introduced. The topology service first checks whether there are enough suitable servers in the in-site list; if not, it adds servers from the out-of-site list. A similar change was made in topology discovery, too.
How can we restore the old behavior or configure this?
If we really want to use in-site DCs only, even when just one is available (as in Exchange 2010 or 2013 RTM–CU5), we can add an entry:
MinSuitableServer = "1"
in Microsoft.Exchange.Directory.TopologyService.exe.config:
In the <Topology> section:
<Topology MinimumPrefixMatch="2"
          EnableWholeForestDiscovery="true"
          MinSuitableServer="1"                  <---------- ADD THIS VALUE
          ForestWideAffinityRequested="true"/>
I turned DC4 off, as we do not need it.
I also added MinSuitableServer = "2" and restarted the Microsoft Exchange Active Directory Topology service (MSExchangeADTopology), or alternatively the whole server.
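A hedged sketch of that service restart from PowerShell; -Force is needed because other Exchange services depend on the topology service:

# Restart the AD Topology service so the new MinSuitableServer value takes effect
Restart-Service MSExchangeADTopology -Force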
CurrentDomainControllers : {DC3.CU1.com, dc2.CU1.com, DC001.CU1.com}
CurrentGlobalCatalogs : {DC3.CU1.com, dc2.CU1.com, DC001.CU1.com}
CurrentConfigDomainController : dc2.CU1.com
Turn DC3 off
From event 2080:
Process Microsoft.Exchange.Directory.TopologyService.exe (PID=2504). Exchange Active Directory Provider has discovered the following servers with the following characteristics:
(Server name | Roles | Enabled | Reachability | Synchronized | GC capable | PDC | SACL right | Critical Data | Netlogon | OS Version)
In-site:
DC001.CU1.com CDG 1 7 7 1 0 1 1 7 1
dc2.CU1.com CDG 1 7 7 1 0 1 1 7 1
DC3.CU1.com CDG 1 0 0 0 0 0 0 0 0
dc4.CU1.com CDG 1 0 0 0 0 0 0 0 0
Out-of-site:
dc5.CU1.com CDG 1 7 7 1 0 1 1 7 1
[PS] C:\Windows\system32>Get-ExchangeServer Exch5-cu1 -Status | fl Current*
CurrentDomainControllers : {DC001.CU1.com, dc2.CU1.com}
CurrentGlobalCatalogs : {DC001.CU1.com, dc2.CU1.com}
CurrentConfigDomainController : dc2.CU1.com
Turn off DC2
CurrentDomainControllers : {DC001.CU1.com, dc5.CU1.com}
CurrentGlobalCatalogs : {DC001.CU1.com, dc5.CU1.com}
CurrentConfigDomainController : DC001.CU1.com
Start DC3
CurrentDomainControllers : {DC001.CU1.com, DC3.CU1.com}
CurrentGlobalCatalogs : {DC001.CU1.com, DC3.CU1.com}
CurrentConfigDomainController : DC001.CU1.com
So we moved back to an in-site DC as soon as it became available.
Now set MinSuitableServer = "1"
CurrentDomainControllers : {dc2.CU1.com, DC3.CU1.com, DC001.CU1.com}
CurrentGlobalCatalogs : {dc2.CU1.com, DC3.CU1.com, DC001.CU1.com}
CurrentConfigDomainController : DC001.CU1.com
Turn off DC2
CurrentDomainControllers : {DC3.CU1.com, DC001.CU1.com}
CurrentGlobalCatalogs : {DC3.CU1.com, DC001.CU1.com}
CurrentConfigDomainController : DC001.CU1.com
Turn off DC3
CurrentDomainControllers : {DC001.CU1.com}
CurrentGlobalCatalogs : {DC001.CU1.com}
CurrentConfigDomainController : DC001.CU1.com
In other words: the same behavior as in 2010 and 2013 RTM–CU5.
Data Culture
One of the books I frequently refer people to is Ralph Kimball's Data Warehouse Lifecycle Toolkit; in it he notes that a culture of analysis is critical to the success of any business intelligence project. I would suggest that a data culture is critical to the success of any business: where wars were once lost because of poor intelligence, today it is market share and profit that are the casualties.
Another of the critical success factors in Kimball's book is cooperation between IT and the business. It is as true today as ever, and I would submit that we data professionals need to be clearer on what the business priorities are. By the same token, business folks need to be a bit more tech savvy so that we can all collaborate to sharpen our businesses.
That's pretty much the way I work; Amy (the data scientist on our team) and I have been having coffee with the marketing chaps who have been running our Data Culture series, to make sure we meet these aims when we start running the series again in the autumn. To be clear, we can't fix all of this, but what we can do is split the event into two days to make it more relevant. For business users, day one will focus on high-level demos, real-world scenarios, and discussion of return on investment, compliance and so on. On the new second day of the event we may also cover some of this, but we'll mainly be running hands-on sessions in four tracks…
Track | Description | Technologies
IoT | Learn how to capture real-time data from sensors to deliver insight across an IoT solution. Get hands-on with sensors such as the Raspberry Pi, and leverage Azure Event Hub and Stream Analytics to collect and process sensor data and visualise it in PowerBI. | Azure Event Hub
Big Data | Learn how to build a data lake and then leverage Hadoop or elastic data warehouse technologies to deliver your big data solutions. Learn how to leverage data source discovery, and orchestrate and manage data transformations and movements to give you control of your data estate. | Azure Data Lake
Machine Learning | Learn how to build a predictive model and deploy it with ease with AzureML. Learn how to leverage R and Python to extend your models, as well as leveraging Microsoft's world-class algorithms. | AzureML
Visualisation & Data Discovery | Learn how to bring your data to life with PowerBI by connecting and transforming your data into insight. Visually explore and create stunning interactive reports, and see all your data through a single pane of glass with live dashboards. | PowerBI
…which you can pick. Some of you may wish to attend a different track each time and come more than once; some may come as a team and spread out. These events will be run by 'softies like Amy and me - and we'll have MVPs along to help as well, like Allan Mitchell on data integration and Andy Cross on big data.
As for the dates, so far we have 22/23 September in London and 8/9 December in Birmingham.
We can’t teach you too much about the business you work in, but hopefully these events will be a good way to span the data conversation across the business and IT. Places will be limited, so don’t delay and register now on the Data Culture site!
On the configurable ranges for cluster heartbeat settings
Hello, this is Nagaoka from Windows Platform Support.
In this post I would like to give a clear overview of the configurable ranges (thresholds) of the parameters that govern cluster heartbeat communication.
Heartbeats are the mechanism by which cluster nodes check each other's health. The following article covers the details, including the steps for changing the heartbeat settings, so please have a look at it as well:
About failover cluster heartbeats
http://blogs.technet.com/b/askcorejp/archive/2012/03/22/3488080.aspx
The configurable ranges of these heartbeat parameters were publicly documented for Windows Server 2008 and Windows Server 2012 / 2012 R2, but not for Windows Server 2008 R2. This post therefore summarizes the configurable ranges of the heartbeat settings for each OS version, including Windows Server 2008 R2.
■ Configurable ranges in Windows Server 2008
Parameter | Default | Minimum | Maximum
SameSubnetDelay | 1000 ms (1 s) | 250 ms (0.25 s) | 2000 ms (2 s)
SameSubnetThreshold | 5 | 3 | 10
CrossSubnetDelay | 1000 ms (1 s) | 250 ms (0.25 s) | 4000 ms (4 s)
CrossSubnetThreshold | 5 | 3 | 10
■ Configurable ranges in Windows Server 2008 R2
Parameter | Default | Minimum | Maximum
SameSubnetDelay | 1000 ms (1 s) | 250 ms (0.25 s) | 2000 ms (2 s)
SameSubnetThreshold | 5 | 3 | 10
CrossSubnetDelay | 1000 ms (1 s) | 250 ms (0.25 s) | 4000 ms (4 s)
CrossSubnetThreshold | 5 | 3 | 20
■ Configurable ranges in Windows Server 2012 / 2012 R2
Parameter | Default | Minimum | Maximum
SameSubnetDelay | 1000 ms (1 s) | 250 ms (0.25 s) | 2000 ms (2 s)
SameSubnetThreshold | 5 | 3 | 120
CrossSubnetDelay | 1000 ms (1 s) | 250 ms (0.25 s) | 4000 ms (4 s)
CrossSubnetThreshold | 5 | 3 | 120
(Threshold values are counts of missed heartbeats.)
As the tables show, on Windows Server 2008 R2 the heartbeat timeout can be extended to at most 20 seconds within the same subnet (2000 ms × 10) and at most 80 seconds across subnets (4000 ms × 20).
On heavily loaded networks, or in environments such as WAN links where packet delays can occur, changing these values can reduce the chance that a transient heartbeat delay triggers failure detection.
However, extending the heartbeat thresholds further than necessary carries the risk of delaying the detection of genuine node failures (for example, a node going down with a blue screen). Please take the various timeouts of TCP/IP and your applications into account and choose values that match your operational policy.
[References]
- Configurable ranges for Windows Server 2012 / 2012 R2
SameSubnetDelay
https://msdn.microsoft.com/en-us/library/jj151938(v=vs.85).aspx
SameSubnetThreshold
https://msdn.microsoft.com/en-us/library/jj151939(v=vs.85).aspx
CrossSubnetDelay
https://msdn.microsoft.com/en-us/library/jj151927(v=vs.85).aspx
CrossSubnetThreshold
https://msdn.microsoft.com/en-us/library/jj151928(v=vs.85).aspx
- Configurable ranges for Windows Server 2008
Clustering and High-Availability
http://blogs.msdn.com/b/clustering/archive/2012/11/21/10370765.aspx
Streaming, 4K, and cross-platform games: Windows 10 at gamescom
It was crowded, it was stuffy, it was hot, it was games - it was Windows 10. At gamescom in Cologne the new operating system was not the main attraction, but it was on display everywhere, and not just at the Microsoft booth. PC makers such as XMG and One showed their brand-new gaming machines running Windows 10, and various game publishers also had Windows 10 on their PCs, along with the Xbox One controller as a perfect input device for the PC as well.
At the Microsoft booth, one classic and two new releases made up most of the PC presentations. First there is the new Windows 10 version of Minecraft. Fans could try it out with touchscreen, controller, or mouse and keyboard. Incidentally, this version is currently free for existing Minecraft owners on PC; everyone else can get into the Minecraft universe inexpensively. With "Fable Legends" and "Gigantic", visitors could try the first two cross-platform games. Not only will they be released simultaneously for Windows 10 and Xbox One, gamers can also move their game progress and acquired character details freely between the two systems.
In the Windows area of the booth there were not only the latest gaming PCs to admire, shining with 4K Ultra HD resolution and other technical innovations; Xbox game streaming could also be tested live, letting a game running on the Xbox in the living room be played remotely from the PC in the study. A great thing when movie night is on the big TV but you don't want to haul the Xbox into another room.
At gamescom, Windows 10 demonstrated that gamers in particular should jump at the free upgrade. Not only does the PC run faster and more securely; new gaming technologies such as virtual reality are supported by Windows 10 out of the box, and new games will soon look even more realistic with Windows 10 and its exclusive DirectX 12.
A post by Boris Schneider-Johne (@Win10Boris)
Product Manager Windows 10 at Microsoft Germany
BlogMS Microsoft Team Blogs – July 2015 Roll-up
BlogMS consolidates a large number of highly relevant and up to date information sources across the Microsoft product and online services portfolio. You can expect to find important announcements and details of Microsoft news, product releases, service packs and important support issues.
A significant number of articles are published each month; this monthly report is intended for those who have missed the weekly updates, which are published on BlogMS. All the blogs are grouped into logical categories so you can quickly skim the entire document and find the information most relevant to you.
The PDF report can be found attached to this posting:
208 Microsoft Team blogs searched, 79 blogs have new articles.
927 new articles found searching from 01-Jul-2015 to 31-Jul-2015
Reconciling SharePoint 2013 Content Database Size to Total Size of Site Collections
Have you ever looked at the total physical size of the content database (DB) files in SQL Server and wondered why it doesn't reconcile with the total size of the site collections contained in the database? What you'll probably observe is that the total size of all site collections in a content DB is considerably less than the database size property you see in SQL Server Management Studio. This blog post will explore some reasons why.
As an example, I took a quick look in my own SharePoint 2013 lab environment at a web application which had 1 content DB. Following is what I observed for SQL vs. SharePoint size proportion...
Web application: http://intranet.contoso.com
Content database: WSS_content_intranet
Per SharePoint
• Site Collection Count : 19
• Sum of Size in MB : 110.757159233093
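For reference, here is a hedged sketch of how these two numbers can be gathered in the SharePoint Management Shell; SPSite.Usage.Storage reports bytes, so the sum is converted to MB:

$sites = Get-SPSite -ContentDatabase WSS_Content_Intranet -Limit All
$sites.Count                                                                      # site collection count
($sites | ForEach-Object { $_.Usage.Storage } | Measure-Object -Sum).Sum / 1MB    # total size in MB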
Per SQL Server Management Studio
Database: WSS_Content_Intranet
Properties:
• Database Size: 206.06 MB
• Database Space Available: 1.09 MB
C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\WSS_Content_Intranet.mdf
154 MB
C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\WSS_Content_Intranet_log.ldf
52 MB
Total size of all site collections was about ½ of the database size property in SQL Server Management Studio. What might explain this?
The SQL database size is the physical size of the database files (mdf and ldf). It is queried by the sp_helpdb stored procedure. This size includes all SQL data related to the database: transactions logs, indexes, SQL database schema, permissions and tables metadata. Depending on SQL administration it may include pre-allocated space for future growth - around 10-25%.
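If you want to script that check, a hedged example using the Invoke-Sqlcmd cmdlet from the SQL Server PowerShell tools (the server name is a placeholder):

Invoke-Sqlcmd -ServerInstance 'SQLSERVER' -Query "EXEC sp_helpdb 'WSS_Content_Intranet'"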
For SQL Server, the actual content size can fluctuate up and down over time. When did the last backup happen? A backup commits all transactions to the DB, and the size becomes smaller. The mdf file size can be smaller than the DB size shown in SQL Server Management Studio because Management Studio shows the current database size including uncommitted transactions in the transaction log (ldf).
Why is SharePoint's report of current site usage so much less than what shows for the SQL content database? There is no direct correlation between the database size and the content size. One reason is that SharePoint displays the size of the content stored within the content database, but not the size of the whitespace - the disk space currently allocated to the database files to permit more content to be uploaded. As users add content to their SharePoint sites, the content database naturally grows. Having a content database that is larger than the content is usually a good thing, because it gives SharePoint room to grow without the performance hit of auto-growing the DB at every write/save.
If the ratio of database size to content size gets unreasonably big, a shrink might be warranted, but do proper research on the effects first. Shrinking database files should only be done in very specific circumstances (NEVER automatically or as part of a maintenance plan) because it causes horrible fragmentation of your databases. A good candidate is the situation where you have just moved a large site collection to another content database, leaving the original database with, say, 80% unused space. Database files grow; that's what they do.
If a user uploads ten 1 MB documents to a site, you'd see the site content in SharePoint's report increase by about 10 MB. At the same time, the SQL database size may not change at all if space has already been allocated for this data, or it may grow by more than 10 MB, because SQL Server maintains auxiliary structures such as transaction logs, indexes, and document properties in order to store these documents. So there is storage overhead added by SharePoint, and possibly by SQL auxiliary data.
The site collection size SharePoint reports does not include SharePoint permission structures or audit and event data. It comprises the total size of the following data:
- documents
- doc versions
- list items
- account personalization info
- webparts
- document metadata
- Recycle bin (1st stage end-user recycle bin)
How is SharePoint's storage usage calculation for a site collection done? For inquiring minds who want to know: SharePoint scans AllDocs, AllFileFragments, AllWebs, AllLists, AllDocVersions, AllUserData, AllWebParts, Personalization, ContentTypes, and RecycleBin for the total size. Please note, the DocStreams table is not considered in this algorithm, so there is a gap between AllDocs and DocStreams for the same file. The actual file in binary form is stored in DocStreams; the AllDocs table stores the meta information. Thus, the DocStreams table will be huge if there are lots of files stored in the site. That being said, the question is why DocStreams is not considered in the calculation algorithm. The reasons are: a) it is an auxiliary table that contains only binary information; and b) the information in this table does not actually reflect the file size, but the length of the binary stream. Therefore, it is not considered. As a result, the content stored in the database for a site collection takes more space than is reflected in SharePoint's storage usage calculation.
You might wonder: since the calculated site collection size and the actual content size are not aligned, what's the point? This number will be the same once the site collection is backed up to disk (using stsadm -o backup or Backup-SPSite). If you're interested, the size reported in the SharePoint UI is from SPSite.UsageInfo.Storage (http://msdn.microsoft.com/EN-US/library/microsoft.sharepoint.spsite.usageinfo.storage(v=office.15).aspx ) and its internal name is SiteDiskUsed. You can manually force a recalculation using SPSite.RecalculateStorageUsed (http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spsite.recalculatestorageused(v=office.15).aspx ).
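A short sketch tying those two APIs together in the SharePoint Management Shell:

$site = Get-SPSite http://intranet.contoso.com
$site.UsageInfo.Storage / 1MB       # the cached SiteDiskUsed value, in MB
$site.RecalculateStorageUsed()      # force a recalculation
$site.Dispose()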
What features in SharePoint 2013 can influence the size of content databases? The following SharePoint Server 2013 features can significantly affect the size of content databases:
- Recycle bins: Until a document is fully deleted from both the first stage and second stage recycle bin, it occupies space in a content database. Calculate how many documents are deleted each month to determine the effect of recycle bins on the size of content databases. For more information, see “Configure Recycle Bin settings in SharePoint Server 2013.”
- Auditing: Audit data can quickly compound and use large amounts of space in a content database, especially if view auditing is turned on. Rather than letting audit data grow without constraint, we recommend that you enable auditing only on the events that are important to meet regulatory needs or internal controls. If you enable auditing within a site collection, use the following guidelines to estimate the space that you must reserve for auditing data:
- Estimate the number of new auditing entries for a site, and multiply this number by 2 KB (entries generally are limited to 4 KB, with an average size of about 1 KB).
- Based on the space that you want to allocate, determine the number of days of audit logs you want to keep.
- TIP! Remember to also configure automatic audit log trimming for your site collection. Site Settings > Site Collection Administration > Site collection audit settings > Audit Log Trimming
- Gradual site delete: When you delete a site collection, it is not immediately removed from the source content database; only a deletion flag is set to 1 for that site collection. You will find a timer job in Central Administration, scoped to the web application, called "Gradual Site Delete." This job must run to delete the whole site collection from the database. By default the job is scheduled to run daily, but you can also force it to run manually, as shown below. When the job runs, the source database should shrink by the size of the removed site collection.
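A hedged sketch for forcing the job from the SharePoint Management Shell; the display-name filter is an assumption, so verify the job's name in your farm first:

$job = Get-SPTimerJob -WebApplication http://intranet.contoso.com |
    Where-Object { $_.DisplayName -like '*Gradual Site Delete*' }   # assumed display name
Start-SPTimerJob $job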
ADDITIONAL REFERENCES
Software boundaries and limits for SharePoint 2013
http://technet.microsoft.com/en-us/library/cc262787.aspx
Storage and SQL Server capacity planning and configuration (SharePoint Server 2013)
https://technet.microsoft.com/en-us/library/cc298801.aspx
Configure Recycle Bin settings in SharePoint Server 2013
https://technet.microsoft.com/en-us/library/cc263125.aspx
Timer job reference (SharePoint 2013)
https://technet.microsoft.com/en-us/library/cc678870.aspx
Best practices for SQL Server in a SharePoint Server farm
http://technet.microsoft.com/en-us/library/hh292622.aspx
PowerTip: View All Available Modules
Summary: Boe Prox shows how to view all available modules in Windows PowerShell.
How can I view all of the available modules in my current Windows PowerShell session?
Run Get-Module with the –ListAvailable parameter:
Get-Module –ListAvailable
Demystify Offline Files
Thanks to Hubert Sachs from the German networking team for translating one of his popular blog posts into English. The post demystifies the core Offline Files concepts. The content below is from Hubert.
***************************************************
Hello everyone,
From time to time we receive support inquiries about Offline Files-related problems. The symptoms range from clients that no longer come online, cannot sync, or throw plenty of sync errors or sync conflicts, up to paths that cannot be reached in slow-link mode although no offline files should be active on those paths.
These problems have one thing in common: the root cause lies in the Offline Files concepts employed, and they can only be solved through a new and sustainable concept for the deployment of Offline Files in that environment.
The migration from an old concept to a new one often implies using the FormatDatabase registry key (http://support.microsoft.com/kb/942974/en-us) to clean up the faulty client configuration. Obviously this brings up questions about backing up the offline-available data on the clients (which might not have synced for months), which usually leads to months-long migration projects.
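For reference, a hedged sketch of setting that key from PowerShell, based on KB 942974. The cache is purged and rebuilt at the next reboot, so treat this as part of a planned migration, never a routine fix:

# Create the Parameters key if it does not exist, then set the flag; takes effect at the next restart
New-Item -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\CSC\Parameters' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\CSC\Parameters' -Name FormatDatabase -PropertyType DWord -Value 1 -Force | Out-Null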
To avoid such lengthy migrations, I want to help administrators build sound Offline Files concepts and enable them to develop flexible plans.
The following are some key concepts in Offline Files.
The 5 ways of offline availability
There are five ways files/folders can get into the client's Offline Files cache; each creates a related scope and partnership within Offline Files. We need to gain administrative control over all of them and maintain them.
- Files made offline available automatically by Folder Redirection.
When configuring a Folder Redirection policy, an Offline Partnership will be automatically created for each redirected folder.
This automatic behavior can be disabled by configuring "Do not automatically make Redirected Folders available Offline". See http://gpsearch.azurewebsites.net/#293
- Administrative assigned Offline Files
The admin can configure paths in a group policy that will be made available offline by the client. See http://gpsearch.azurewebsites.net/#2056
- User making content 'always available' by selecting this option in the context menu of a file/folder
By default the user can decide what paths he/she wants to make offline available.
This can and should be disabled by the Policy “Remove Make Available Offline Command” in order to not get unforeseen partnerships e.g. on group drives. See: http://gpsearch.azurewebsites.net/#7857
- Caching settings of a share on a file server
For each CIFS share these settings are available:
“Only the files and programs that users specify are available offline” (basically: The Client can cache if it wants to) which is the default in Windows,
“No files and programs from the shared folder are available offline” (basically: No caching allowed here), and “All files and programs that users open from the shared folder are automatically available offline” (basically: The Client has to cache).
When the client accesses a share configured as "the client has to cache", it will create a sync partnership and start to sync the files that have been opened to the client.
In that regard, the settings on the respective shares have to be checked and corrected.
- LinkTargetCaching behavior when creating an .lnk file on an offline available path
By default, the target of a .lnk shortcut will be made available offline if the .lnk file itself is made available offline.
A user could therefore unintentionally create an offline partnership against a group share if the shortcut target is on a share that isn't yet available offline. This option is enabled by default to make sure that users have the expected files available when working with shortcuts. The drawback is that slow-link policies will also apply to this new share, and data that isn't cached will no longer be available when the share transitions to a slow connection.
This can be controlled by the LinkTargetCaching Registry Key:
http://support.microsoft.com/kb/817275/en-us
Permissions on the file server structure
Because Offline Files also needs to sync data from other (not currently logged-on) users of the computer, the Offline Files sync does not take place exclusively in the rights context of the logged-on user. Thus, Offline Files requires a set of minimum permissions to sync (and in fact work) correctly. Not setting the permissions correctly on the file server will cause sync problems, can prevent the client from switching to online mode, and leads to a host of other problems (for example, files created offline won't sync back to the server).
The required minimum permissions for the share, the NTFS permissions of the shared folder and the permissions for each folder in the path are documented in: http://support.microsoft.com/kb/2512089/en-us.
Scopes and cached files
Offline Files considers each share (\\Server\Share) as a scope. Each access to a network path will be checked against the list of scopes which Offline Files has partly or fully cached.
Note here that \\Server\Share is treated completely independently from \\Server.contoso.com\Share. As a result administrators must ensure that only the FQDN paths are used for drive mappings, redirections and DFSN paths.
If a match is found, Offline Files will handle the request and, among other things, decide whether the request should be satisfied from the server (online mode) or from the local cache (offline mode).
Thus the scope is the unit that decides whether a path is online or offline, and it is always the complete share that is taken offline. That said, there are situations where only specific files are treated as offline, for example when they are in a synchronization conflict. You can also configure a subset of a scope to be always offline and only synced during logon and logoff (this is called suspending and can be configured through http://gpsearch.azurewebsites.net/#2584).
If Offline Files treats a scope as offline, you can only use what is locally cached.
If you made only a subset of \\Server\Share available offline, you can reach only that subset of data in offline mode, and you lose access to the other branches that are not in the cache until you switch back to online mode. While offline, greyed-out icons identify data that is NOT cached but whose name is kept in the Offline Files database only for consistency.
Understanding which part of a scope is available offline is very important, especially with DFS Namespaces, in order to make meaningful decisions about how to design the directory structures on the server side.
It makes sense to host data that should be available offline in a different scope than data that should not be. It is common practice to host users' offline-available home shares in a separate DFS namespace from group shares that are not available offline.
Slow-link mode and background sync
With the policy "Configure slow-link mode", threshold values for latency and/or bandwidth can be defined. Those values can be applied to all paths (indicated by an '*') or to individual paths.
If, for example, the latency on the connection rises above the threshold value, Offline Files will switch into the offline state. See http://gpsearch.azurewebsites.net/#2093
When deploying Offline Files on DFS paths, it might make sense to define very high latency values for the DFS root, so the root itself cannot switch to offline mode due to a slow link but the deeper paths can. This is because the DFSN root share is treated by Offline Files in the same way as any other share that can go offline (slow connection).
For all the details see the blog of my colleague Ned Pyle:
http://blogs.technet.com/b/askds/archive/2011/12/14/slow-link-with-windows-7-and-dfs-namespaces.aspx
When the client switches to offline (slow-link) mode, the policy "Configure Background Sync" governs whether and how often the client will try to sync with the server, which is important so data is still synced back to the server for backup and consistency:
See http://gpsearch.azurewebsites.net/#2095
It is not recommended to configure the background sync frequency to a very low value (minimum should be 30 minutes).
Migration of offline available data to other paths
Avoid this Scenario at all cost!
http://support.microsoft.com/kb/977229/en-us describes two supported ways to migrate offline-available data without resetting the client cache (and therefore resyncing).
One way applies only to offline availability via Folder Redirection (an automatic move done by the Folder Redirection service); the other applies to all other ways of offline availability (a manual move by script).
Both ways have in common that you have to follow the KB article to the letter if you want to succeed, and you must be up to date with all binaries on the client (see http://support.microsoft.com/kb/2820927).
As you can see from the first KB you have to follow a sequence of steps including Changes in AD/Fileserver/Infrastructure and on the Client.
From my experience this is seldom possible to do for a larger number of clients and it is very hard to automate.
Therefore the use of DFS for abstraction of logical paths and physical paths is strongly recommended.
If you want to migrate offline-available data on a DFS path, you just need to log off the client and change the path the DFS link is pointing to.
As the path used by Offline Files does not change, Offline Files won't notice the migration, as long as you preserve the file state, for example by using robocopy (a sketch follows below).
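A hedged robocopy example that preserves timestamps, attributes, and NTFS security so the file state survives the move (both paths are placeholders):

robocopy \\OldServer\Share \\NewServer\Share /MIR /COPYALL /DCOPY:T /R:1 /W:1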
However when using Offline Files on DFS (or Roaming Profiles or DFSR) make sure you stay within the supported limits:
Also note that changing a path from a NetBIOS name to an FQDN is also a full-blown move, as Offline Files treats these two paths as different even though they end up at the same data. Here the 'Verify old and new folder redirection targets' policy mentioned in KB 2820927 has to be set.
Configurations & scenarios that should be avoided
Firstly the same set of data should not be made available offline in different ways!
For example: Folder Redirection with offline availability on the path \\Server\Share\Username\MyDocuments, plus administratively assigned Offline Files on the path \\Server\Share\Username. Here the subfolder MyDocuments has been made available offline in two different ways. This nested offline availability is something Offline Files does not cope well with, in my experience.
Another scenario is the use of several machines simultaneously by the same user.
For example, a laptop with offline availability and a desktop machine without it.
In this scenario you have to ensure that the client using offline availability does not fall into offline mode, for example due to slow-link mode. If it does, the user can inadvertently change a file both on the server and in the local cache of the offline client, resulting in a sync conflict the next time the laptop switches to online mode. The same is true for data that is used by several users; such data is unfit for use with Offline Files. Also, avoid redirecting the AppData directory. This can cause severe delays (depending on network performance) for applications on the client that expect their data to be stored on a local disk!
It should be self-explanatory that before Offline Files is deployed to clients, they should have the latest hotfixes for Offline Files (KB 2820927) and File Services (client and server, KB 2899011), as well as the latest security updates, installed. Furthermore, patch management is required to keep those components up to date.
On a related note, if you are planning a new client deployment and exploring new solutions for offline access to user data, I highly recommend looking into the "Work Folders" feature introduced in Windows Server 2012 R2 and Windows 8.1. To find out more, see https://technet.microsoft.com/en-us/library/dn265974.aspx.
I hope this helped you understand Offline Files a bit better and enables you to build sound and solid concepts.
With best Regards
Hubert
Event tip: Microsoft Dynamics solutions at IT & Business (September 29 – October 1, 2015, Messe Stuttgart)
Microsoft will once again exhibit with a partner booth at IT & Business, the trade fair for digital processes and solutions. Experience the sales, marketing, social, and service capabilities of Dynamics CRM Online and Microsoft Social Engagement, as well as tailored ERP solutions with which you can connect and comprehensively manage your entire company.
Let us show you how to raise your sales staff's productivity so they can focus more sharply on the right customers and priorities. Experience live how to give your sales people the insights, information, and tools they need to find the right customers, close the right deals, and harness the strength of the entire company. Win loyal, long-term customers with cross-channel service, more productive employees, and flexible service models.
Our Microsoft solution providers on site will advise you individually and show you technology solutions tailored to your company's requirements.
The following Microsoft partners will present their solutions:
Premium partners
Infoman AG, http://www.infoman.de/
proMX, www.promx.net
Other partners
alnamic AG, http://www.alnamic.com/
awisto, awis.to/itbusiness2015
aXon, http://www.axongmbh.de/
blue-zone, http://www.blue-zone.de/
ITVT, http://www.itvt.de/
ORBIS, http://www.orbis.de/
Visit us at booth 1E41 – the Dynamics partners look forward to your visit!
You can also learn more about our partners here on the web.