Channel: TechNet Blogs

Top Contributors Awards! Beginners Guide to implement AJAX CRUD, Azure- Serverless messaging and much more!


Welcome back for another analysis of contributions to TechNet Wiki over the last week.

First up, the weekly leader board snapshot...

 

As always, here are the results of another weekly crawl over the updated articles feed.

 

Ninja Award Most Revisions Award
Who has made the most individual revisions

 

#1 Peter Geelen with 99 revisions.

 

#2 Dave Rendón with 84 revisions.

 

#3 Ken Cenerelli with 40 revisions.

 

Just behind the winners but also worth a mention are:

 

#4 karimSP with 34 revisions.

 

#5 AnkitSharma007 with 23 revisions.

 

#6 RajeeshMenoth with 22 revisions.

 

#7 [Kamlesh Kumar] with 9 revisions.

 

#8 Ramakrishnan Raman with 9 revisions.

 

#9 SYEDSHANU - MVP with 7 revisions.

 

#10 Richard Mueller with 6 revisions.

 

 

Ninja Award Most Articles Updated Award
Who has updated the most articles

 

#1 Ken Cenerelli with 34 articles.

 

#2 Peter Geelen with 32 articles.

 

#3 Dave Rendón with 19 articles.

 

Just behind the winners but also worth a mention are:

 

#4 karimSP with 18 articles.

 

#5 RajeeshMenoth with 13 articles.

 

#6 AnkitSharma007 with 6 articles.

 

#7 Luigi Bruno with 4 articles.

 

#8 Av111 with 3 articles.

 

#9 [Kamlesh Kumar] with 3 articles.

 

#10 Simon.Rech with 3 articles.

 

 

Ninja Award Most Updated Article Award
Largest amount of updated content in a single article

 

The article to have the most change this week was TechNet Guru Contributions - Azure, by Durval Ramos

This week's reviser was RajeeshMenoth

Says: This is the one place to check all articles that participated in the TechNet Guru Contributions in the Azure category, and to see which article was selected as the winner. Nice article, Durval 🙂

 

Ninja Award Longest Article Award
Biggest article updated this week

 

This week's largest document to get some attention is Beginners Guide to implement AJAX CRUD Operations using JQuery DataTables in ASP.NET MVC 5, by Ehsan Sajjad

This week's reviser was karimSP

Says: Want to know how to implement CRUD operations for a particular entity? Check out this nice article by Ehsan. A very informative and well-written article 🙂

 

Ninja Award Most Revised Article Award
Article with the most revisions in a week

 

This week's most fiddled with article is Azure: Serverless messaging, by Steef-Jan Wiggers. It was revised 16 times last week.

This week's revisers were RajeeshMenoth, Dave Rendón, Peter Geelen & Ken Cenerelli

Says: Do you know what Serverless messaging means? Check out this article by Steef-Jan. An awesome article, explained in detail. Good to read 🙂

 

Ninja Award Most Popular Article Award
Collaboration is the name of the game!

 

The article to be updated by the most people this week is Wiki Ninjas Blog Authoring Schedule - 2018, by pituach

Says: If you are a Wiki Ninjas Blog author and have not yet placed your name on the Q2 2018 schedule, please go and add it. We are waiting for all authors to add their names. Go Go Ninjas 🙂

This week's revisers were John Naguib, karimSP, SYEDSHANU - MVP, Ken Cenerelli & [Kamlesh Kumar]

 

Ninja Award Ninja Edit Award
A ninja needs lightning fast reactions!

 

Below is a list of this week's fastest ninja edits. That's an edit made to an article shortly after another person's edit.

 

Ninja Award Winner Summary
Let's celebrate our winners!

 

Below are a few statistics on this week's award winners.

Most Revisions Award Winner
The reviser is the winner of this category.

Peter Geelen

Peter Geelen has been interviewed on TechNet Wiki!

Peter Geelen has featured articles on TechNet Wiki!

Peter Geelen has won 207 previous Top Contributor Awards. Most recent five shown below:

Peter Geelen has TechNet Guru medals, for the following articles:

Peter Geelen's profile page

Most Articles Award Winner
The reviser is the winner of this category.

Ken Cenerelli

Ken Cenerelli has been interviewed on TechNet Wiki!

Ken Cenerelli has featured articles on TechNet Wiki!

Ken Cenerelli has won 69 previous Top Contributor Awards. Most recent five shown below:

Ken Cenerelli has TechNet Guru medals, for the following articles:

Ken Cenerelli's profile page

Most Updated Article Award Winner
The author is the winner, as it is their article that has had the changes.

Durval Ramos

Durval Ramos has been interviewed on TechNet Wiki!

Durval Ramos has featured articles on TechNet Wiki!

Durval Ramos has won 20 previous Top Contributor Awards. Most recent five shown below:

Durval Ramos has TechNet Guru medals, for the following articles:

Durval Ramos's profile page

Longest Article Award Winner
The author is the winner, as it is their article that is so long!

Ehsan Sajjad

Ehsan Sajjad has won 6 previous Top Contributor Awards. Most recent five shown below:

Ehsan Sajjad has TechNet Guru medals, for the following articles:

Ehsan Sajjad has not yet had any interviews or featured articles (see below)

Ehsan Sajjad's profile page

Most Revised Article Winner
The author is the winner, as it is their article that has been changed the most

Steef-Jan Wiggers

Steef-Jan Wiggers has been interviewed on TechNet Wiki!

Steef-Jan Wiggers has featured articles on TechNet Wiki!

Steef-Jan Wiggers has won 23 previous Top Contributor Awards. Most recent five shown below:

Steef-Jan Wiggers has TechNet Guru medals, for the following articles:

Steef-Jan Wiggers's profile page

Most Popular Article Winner
The author is the winner, as it is their article that has had the most attention.

pituach

pituach has been interviewed on TechNet Wiki!

pituach has featured articles on TechNet Wiki!

pituach has won 18 previous Top Contributor Awards. Most recent five shown below:

pituach has TechNet Guru medals, for the following articles:

pituach's profile page

Ninja Edit Award Winner
The author is the reviser, for it is their hand that is quickest!

Dave Rendón

Dave Rendón has won 16 previous Top Contributor Awards. Most recent five shown below:

Dave Rendón has TechNet Guru medals, for the following articles:

Dave Rendón has not yet had any interviews or featured articles (see below)

Dave Rendón's profile page

 

Says: Another great week from everyone in our community! Thank you all for so much great reading material this week!
Please keep reading and contributing!

 

Best regards,
— Ninja [Kamlesh Kumar]

 


Are Innovators Born or Made? [Updated 4/8]

(This article is a translation of "Are innovators born or made?", posted to the Microsoft Partner Network blog on January 10, 2018. Please refer to the linked original page for the latest information.)

Ever since Darwin first published the theory of evolution he drew from the Galápagos giant tortoise, people have debated which matters more: innate talent or environment. More than 150 years after the theory's publication, that debate now extends to corporate culture. Are the talents of British entrepreneur Richard Branson, American entrepreneur Elon Musk, or American television host Oprah Winfrey gifts of nature, or were they acquired later? Here we consider both views, ask how to help innovators exercise their creativity, and look at the role collaboration plays in bringing those strengths together.

Innate factors

According to a Forbes article (in English), innovators are born with several characteristics:

  • They do things differently. Innovators think and act contrary to conventional wisdom, creating environments and workplaces that challenge established norms.
  • They enjoy risk. Conventional approaches and quick fixes do not interest innovators. Even when the risk is somewhat greater, an innovator deliberately chooses the more complex solution.
  • Change and speed are built into their DNA. Innovators are never satisfied with the status quo; as soon as they achieve something, they find a new goal. The tendency is especially pronounced when startups are multiplying and competing for market share. As a result, innovators ride waves of change skillfully and deftly deflect the headwinds their ideas face.

Acquired factors

Innovation is not achieved single-handedly. Leaders need to understand how important it is to invest in building habits like these:

  • Keep a collective mindset. Even a brilliant individual idea cannot be realized unless the whole group cooperates. An innovator's skills are honed inside a future-oriented team.
  • Set power aside and serve the common good. The best innovators are those who lead from the heart of the company, growing employees' creativity and supporting their career development.
  • Respect every opinion. Solutions to hard problems emerge from multiple differing perspectives. Innovators steer discussions so they stay lively and productive, cultivating a results-focused corporate culture.

Conclusion: anyone can acquire the power to innovate

Just as maintaining exercise and healthy eating habits takes effort, so does training creativity. According to a Fortune article (in English), Todd Henry, a consultant who helps organizations generate ideas, advises his clients to set aside one hour a week simply to sit and think. By letting thought roam free of all constraints, they arrive at new discoveries. This is an excellent example of combining an innovator's inborn intuition with environmental factors to stand out from the competition. Making time to think ignites the innate urge, present in every innovator, to create something new.

Support for every innovator

Feeling motivated to innovate? Once you know how, all that remains is a platform for pursuing your goals. Microsoft works with partners around the world to help you deliver innovative products to the right users. Our mission is to help partners and their customers grow their businesses together. For details, please visit the Microsoft Partner Network website.

 

SharePoint: The complete guide to user profile cleanup – Part 1


Part 1: High-level Concepts

Throughout the lifetime of SharePoint, there have been several changes to how user profiles are imported and how they are automatically cleaned up when they fall out of scope. In upcoming posts, I will cover them all, for each currently supported version of the product. However, I think it's best to first cover a few high-level topics that apply to all versions and all profile sync methods.

 

Out of Scope:

When I say that a user is "out-of-scope" for profile sync, it simply means that the current sync configuration excludes them, which can be due to one of the following factors:

  • OU / Container Selection: the user does not exist in the containers you have selected for sync.
  • Connection Filters: The current sync connection filters exclude the user.
  • Deletion: The user has been deleted from Active Directory or whatever directory store you're using.

 

Unmanaged Profiles:

Unmanaged profiles are simply that: user profiles that exist in the User Profile Service Application (UPA), but are not being managed by the sync. These are also known as "stub" profiles, or "non-imported" profiles. Typically, the reason that these profiles are "unmanaged" is because they are "out-of-scope" for the sync for one of the reasons above. I covered this topic in detail in a previous post here: https://blogs.technet.microsoft.com/spjr/2017/09/24/sharepoint-all-about-non-imported-user-profiles/

 

Cleanup process:

Assuming we're talking about managed profiles (those imported by the Profile Sync), the process goes like this:

  • The Sync imports the user profile.
  • Later, the user falls "out-of-scope" for one reason or another, as covered above.
  • At this point, the Sync marks the profile for deletion, but does not actually delete anything. The profile should show in the "Profiles Missing from Import" view in the UPA | Manage User Profiles.
  • The "My Site Cleanup Job" timer job processes the profiles that are marked for deletion and actually deletes them.
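
The final step above can also be triggered on demand from PowerShell rather than waiting for the schedule. A minimal sketch, assuming the timer job's internal name contains "MySiteCleanup" (adjust the filter if your farm names it differently):

```powershell
# Load the SharePoint snap-in if it isn't loaded already.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Find the My Site Cleanup timer job and queue it to run now.
# Assumption: the job's internal name contains "MySiteCleanup".
$job = Get-SPTimerJob | Where-Object { $_.Name -like "*MySiteCleanup*" }
$job | Start-SPTimerJob

# Start-SPTimerJob only queues the job; check LastRunTime afterward to confirm it ran.
$job | Select-Object Name, LastRunTime
```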

 

Depending on the version and import method used, there are different factors in play and issues to be aware of. To avoid one gigantic and confusing post, I have chosen to split these topics out by version. The links will become active as I publish the content.

Part 2: SharePoint 2010

Part 3: SharePoint 2013

Part 4: SharePoint 2016

SharePoint: The complete guide to user profile cleanup – Part 2 – 2010


This is part 2 in a series. You can find part 1 here:

SharePoint: The complete guide to user profile cleanup – Part1

 

Sync Options:

Profile Synchronization (AKA: "FIM Sync")

In SharePoint 2010, you really only have one option, and that's to use "SharePoint Profile Synchronization" AKA: "FIM Sync". This is where we use a custom build of Forefront Identity Manager 2010 (FIM) built into SharePoint 2010 to sync user profiles.

There is a radio button in the UPA for "Enable External Identity Provider", but I'm not sure that was ever used, or ever really worked.

 

 

Step 1: Determine if the profile is already marked for deletion.

Run this SQL query against the Profile database:

Select * from userprofile_full where bDeleted = 1

 

If your target profiles are in the results, that means they are already marked for deletion. All you should need to do is run the My Site Cleanup Job.

Note: Managed profiles marked for deletion should also show in Central Admin | Your UPA | Manage User Profiles | Profiles Missing from Import.

 

 

Step 2: Determine if the profile is managed or unmanaged.

Run the following PowerShell to get a list of all your unmanaged profiles:

$upa = Get-spserviceapplication | ?{$_.typename -match "profile"}

Set-SPProfileServiceApplication $upa -GetNonImportedObjects $true | Out-File c:\temp\NonImportedProfiles.txt

 

If the target profiles show up in the "NonImportedProfiles.txt" file, then you need to manually mark them for deletion with PowerShell:

$upa = Get-spserviceapplication | ?{$_.typename -match "profile"}

Set-SPProfileServiceApplication $upa -PurgeNonImportedObjects $true

 

If the target profiles are managed profiles and not marked for deletion, then you need to look into why the Sync is not marking them for deletion.

Document your Sync connection filters and selected OUs / containers and check your target profiles against them.

Take a look at the FIM Client (miisclient.exe) on the server running the Synchronization service. Detailing exactly what to look for in the FIM client is beyond the scope of this blog post, but generally speaking, if you have entire Sync steps that are failing, that's likely the problem.

 

Step 3: Run a Full Sync.

If you've made recent changes to your Sync connection filters or AD container selection, it takes a Full Sync to apply those changes to all profiles. Also, an Incremental Sync only gets one shot at updating a profile. If something went wrong during the Incremental that ran right after the user fell out-of-scope (deleted from AD, etc), that change is missed. If the user object in AD does not change again, the Incremental will not attempt to pull that user in again. Therefore, a failure during a single run of the Sync could cause the profile to never be processed. For this reason, we recommend that you run a Full Sync on some type of recurring schedule. The interval is up to you, but something between once a week and once a month should work. There is no way to schedule a Full Sync in the UI, but you can accomplish the same thing with a Windows Scheduled Task and this PowerShell:

$siteUrl="http://yourCentralAdminSiteHere/" #Any site associated with target UPA

$site= New-Object Microsoft.SharePoint.SPSite($siteUrl)

$serviceContext = [Microsoft.SharePoint.SPServiceContext]::GetContext($site)

$configManager = New-Object Microsoft.Office.Server.UserProfiles.UserProfileConfigManager($serviceContext)

$configManager.StartSynchronization($true)
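
If you want to go the scheduled-task route, here is one hedged sketch using the ScheduledTasks cmdlets available on Windows Server 2012 and later (the script path and account name below are hypothetical placeholders; on older servers, schtasks.exe can do the same job):

```powershell
# Register a weekly task that runs the Full Sync script above.
# Assumptions: the script is saved as C:\Scripts\FullSync.ps1 (hypothetical path)
# and CONTOSO\spfarm (hypothetical account) has rights to start a sync.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument "-ExecutionPolicy Bypass -File C:\Scripts\FullSync.ps1"
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At 2am
Register-ScheduledTask -TaskName "UPA Full Sync" -Action $action -Trigger $trigger `
    -User "CONTOSO\spfarm" -Password (Read-Host "Farm account password")
```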

 

If the target profiles have been deleted in Active Directory, but the Sync is not marking them for deletion, the Active Directory Recycle bin may be in play as documented here: https://blogs.technet.microsoft.com/spjr/2018/03/07/sharepoint-2010-2013-fim-sync-does-not-remove-profiles-for-users-that-were-deleted-from-ad/

 

 

Step 4: My Site Cleanup Job

While the Sync marks out-of-scope profiles for deletion, it doesn't actually delete anything. That's left to the My Site Cleanup Job.

Check Central Administration | Monitoring | Timer Jobs | Review Job Definitions | My Site Cleanup Job. Make sure it's set to run at least once per day (default in SharePoint 2010 is hourly).

If the target user profiles are marked for deletion (bDeleted = 1) and the My Site Cleanup timer job is running, but the profiles are not being deleted, then there is some problem with the timer job. Review the SharePoint ULS logs from the server that ran the job, covering the timeframe when the job ran.
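
One convenient way to gather those ULS entries is Merge-SPLogFile, which pulls matching log records from every server in the farm into a single file. The time window and output path below are placeholders:

```powershell
# Merge ULS entries from all farm servers for the window around the job run.
# Adjust the times to bracket your My Site Cleanup Job's last run.
Merge-SPLogFile -Path "C:\temp\MySiteCleanup.log" `
    -StartTime "04/08/2018 01:00" -EndTime "04/08/2018 02:00"
```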

 

SharePoint: The complete guide to user profile cleanup – Part 3 – 2013


This is part 3 in a series. You can find other parts here:

SharePoint: The complete guide to user profile cleanup – Part1

SharePoint: The complete guide to user profile cleanup – Part 2 – 2010

SharePoint: The complete guide to user profile cleanup – Part 4 – 2016

 

Sync Options:

In SharePoint 2013, you have two options. You can use "SharePoint Profile Synchronization" AKA: "FIM Sync". This is where we use a custom build of Forefront Identity Manager 2010 (FIM) built into SharePoint 2013 to sync user profiles. You can also use another option that was introduced in SharePoint 2013 called Active Directory Import (aka: "AD Import", "ADI"). For some details about the differences and switching between the two import types, see my previous post here: https://blogs.technet.microsoft.com/spjr/2017/08/14/sharepoint-considerations-when-switching-from-fim-sync-to-ad-import/

 

Active Directory Import (aka: ADI)

 

ADI Step 1: Determine if the profile is already marked for deletion.

Run this SQL query against the Profile database:

Select * from userprofile_full where bDeleted = 1

 

If your target profiles are in the results, that means they are already marked for deletion. All you should need to do is run the My Site Cleanup Job.

Note: Managed profiles marked for deletion should also show in Central Admin | Your UPA | Manage User Profiles | Profiles Missing from Import.

 

ADI Step 2: Run a Full Import.

"Out of Scope" (deleted, filtered, moved to a non-imported OU) users do not have their profiles automatically cleaned up by an incremental import. With AD Import, we don't use the Sync database to store "state" information about each user. As such, the only way AD Import can tell that a user has fallen "out of scope" is to import them. If the user object has not changed in AD, an incremental import will not pick them up. Luckily, AD Import is fast, so running a Full Import is not a big deal. For more on this, see my colleague's post on the subject: https://blogs.msdn.microsoft.com/spses/2014/04/13/sharepoint-2013-adimport-is-not-cleaning-up-user-profiles-in-sharepoint-whose-ad-accounts-are-disabled/

 

ADI Step 3: Mark non-imported profiles for deletion.

Run the following PowerShell to get a list of all your unmanaged profiles:

$upa = Get-spserviceapplication | ?{$_.typename -match "profile"}

Set-SPProfileServiceApplication $upa -GetNonImportedObjects $true | Out-File c:\temp\NonImportedProfiles.txt

 

If the target profiles show up in the "NonImportedProfiles.txt" file, then you need to manually mark them for deletion with PowerShell:

$upa = Get-spserviceapplication | ?{$_.typename -match "profile"}

Set-SPProfileServiceApplication $upa -PurgeNonImportedObjects $true

 

If the target profiles are managed profiles, not marked for deletion, and you have run a Full Import, then you need to look into why AD Import is not marking them for deletion.

Document your connection filter and selected OUs / containers and check your target profiles against them. If you're using a complex LDAP filter on your import connection, you should consider using an LDAP tool like LDP.exe or LDAP Browser to test the LDAP filter and make sure it includes and excludes the users you think it should.
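
As a quick sanity check before changing the connection, you can also run an LDAP filter directly from PowerShell with .NET's DirectorySearcher. The filter below is only an example (it matches enabled user accounts); substitute the filter from your own import connection:

```powershell
# Count the objects an LDAP filter matches in the current domain.
# Example filter: user objects that are not disabled; replace with your own.
$searcher = New-Object System.DirectoryServices.DirectorySearcher
$searcher.Filter = "(&(objectCategory=person)(objectClass=user)(!userAccountControl:1.2.840.113556.1.4.803:=2))"
$searcher.PageSize = 1000   # page results so large domains return all matches
$results = $searcher.FindAll()
"Matched $($results.Count) user objects"
```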

 

ADI Step 4: My Site Cleanup Job

While "Set-SPProfileServiceApplication $upa -PurgeNonImportedObjects $true" marks out-of-scope profiles for deletion, it doesn't actually delete anything. That's left to the My Site Cleanup Job.

Check Central Administration | Monitoring | Timer Jobs | Review Job Definitions | My Site Cleanup Job. Make sure it's set to run at least once per day (default in SharePoint 2013 is once daily).

If the target user profiles are marked for deletion (bDeleted = 1) and the My Site Cleanup timer job is running, but the profiles are not being deleted, then there is some problem with the timer job. Review the SharePoint ULS logs from the server that ran the job, covering the timeframe when the job ran.

 

 

 

 

Profile Synchronization (aka: "FIM Sync")

Note: This section is identical to my SharePoint 2010 post because with FIM Sync, there is no difference in profile cleanup between the two versions: SharePoint: The complete guide to user profile cleanup – Part 2 – 2010

 

FIM Sync Step 1: Determine if the profile is already marked for deletion.

Run this SQL query against the Profile database:

Select * from userprofile_full where bDeleted = 1

 

If your target profiles are in the results, that means they are already marked for deletion. All you should need to do is run the My Site Cleanup Job.

Note: Managed profiles marked for deletion should also show in Central Admin | Your UPA | Manage User Profiles | Profiles Missing from Import.

 

 

FIM Sync Step 2: Determine if the profile is managed or unmanaged.

Run the following PowerShell to get a list of all your unmanaged profiles:

$upa = Get-spserviceapplication | ?{$_.typename -match "profile"}

Set-SPProfileServiceApplication $upa -GetNonImportedObjects $true | Out-File c:\temp\NonImportedProfiles.txt

 

If the target profiles show up in the "NonImportedProfiles.txt" file, then you need to manually mark them for deletion with PowerShell:

$upa = Get-spserviceapplication | ?{$_.typename -match "profile"}

Set-SPProfileServiceApplication $upa -PurgeNonImportedObjects $true

 

If the target profiles are managed profiles and not marked for deletion, then you need to look into why the Sync is not marking them for deletion.

Document your Sync connection filters and selected OUs / containers and check your target profiles against them.

Take a look at the FIM Client (miisclient.exe) on the server running the Synchronization service. Detailing exactly what to look for in the FIM client is beyond the scope of this blog post, but generally speaking, if you have entire Sync steps that are failing, that's likely the problem.

 

FIM Sync Step 3: Run a Full Sync.

If you've made recent changes to your Sync connection filters or AD container selection, it takes a Full Sync to apply those changes to all profiles. Also, an Incremental Sync only gets one shot at updating a profile. If something went wrong during the Incremental that ran right after the user fell out-of-scope (deleted from AD, etc), that change is missed. If the user object in AD does not change again, the Incremental will not attempt to pull that user in again. Therefore, a failure during a single run of the Sync could cause the profile to never be processed. For this reason, we recommend that you run a Full Sync on some type of recurring schedule. The interval is up to you, but something between once a week and once a month should work. There is no way to schedule a Full Sync in the UI, but you can accomplish the same thing with a Windows Scheduled Task and this PowerShell:

$siteUrl="http://yourCentralAdminSiteHere/" #Any site associated with target UPA

$site= New-Object Microsoft.SharePoint.SPSite($siteUrl)

$serviceContext = [Microsoft.SharePoint.SPServiceContext]::GetContext($site)

$configManager = New-Object Microsoft.Office.Server.UserProfiles.UserProfileConfigManager($serviceContext)

$configManager.StartSynchronization($true)

 

If the target profiles have been deleted in Active Directory, but the Sync is not marking them for deletion, the Active Directory Recycle bin may be in play as documented here: https://blogs.technet.microsoft.com/spjr/2018/03/07/sharepoint-2010-2013-fim-sync-does-not-remove-profiles-for-users-that-were-deleted-from-ad/

 

FIM Sync Step 4: My Site Cleanup Job

While the Sync marks out-of-scope profiles for deletion, it doesn't actually delete anything. That's left to the My Site Cleanup Job.

Check Central Administration | Monitoring | Timer Jobs | Review Job Definitions | My Site Cleanup Job. Make sure it's set to run at least once per day (default in SharePoint 2013 is once per day).

If the target user profiles are marked for deletion (bDeleted = 1) and the My Site Cleanup timer job is running, but the profiles are not being deleted, then there is some problem with the timer job. Review the SharePoint ULS logs from the server that ran the job, covering the timeframe when the job ran.

 

SharePoint: The complete guide to user profile cleanup – Part 4 – 2016


 

This is part 4 in a series. You can find other parts here:

SharePoint: The complete guide to user profile cleanup – Part1

SharePoint: The complete guide to user profile cleanup – Part 2 – 2010

SharePoint: The complete guide to user profile cleanup – Part 3 – 2013

 

Sync Options:

In SharePoint 2016, you have two options. Like SharePoint 2013, you can use Active Directory Import (aka: "AD Import", "ADI"). You also have the option of using an "External Identity Manager". In most cases, this will be Microsoft Identity Manager 2016 (aka: MIM), which is the successor to Forefront Identity Manager (FIM).

 

Active Directory Import (aka: ADI)

 

ADI Step 1: Determine if the profile is already marked for deletion.

Run this SQL query against the Profile database:

Select * from upa.userprofile_full where bDeleted = 1

 

If your target profiles are in the results, that means they are already marked for deletion. All you should need to do is run the My Site Cleanup Job. See step 4 below.

Note: Managed profiles marked for deletion should also show in Central Admin | Your UPA | Manage User Profiles | Profiles Missing from Import.

 

ADI Step 2: Run a Full Import.

"Out of Scope" (deleted, filtered, moved to a non-imported OU) users do not have their profiles automatically cleaned up by an incremental import. With AD Import, we don't use the Sync database to store "state" information about each user. As such, the only way AD Import can tell that a user has fallen "out of scope" is to import them. If the user object has not changed in AD, an incremental import will not pick them up. Luckily, AD Import is fast, so running a Full Import is not a big deal. For more on this, see my colleague's post on the subject: https://blogs.msdn.microsoft.com/spses/2014/04/13/sharepoint-2013-adimport-is-not-cleaning-up-user-profiles-in-sharepoint-whose-ad-accounts-are-disabled/

 

ADI Step 3: Mark non-imported profiles for deletion.

Run the following PowerShell to get a list of all your unmanaged profiles:

$upa = Get-spserviceapplication | ?{$_.typename -match "profile"}

Set-SPProfileServiceApplication $upa -GetNonImportedObjects $true | Out-File c:\temp\NonImportedProfiles.txt

 

If the target profiles show up in the "NonImportedProfiles.txt" file, then you need to manually mark them for deletion with PowerShell:

$upa = Get-spserviceapplication | ?{$_.typename -match "profile"}

Set-SPProfileServiceApplication $upa -PurgeNonImportedObjects $true

 

If the target profiles are managed profiles, not marked for deletion, and you have run a Full Import, then you need to look into why AD Import is not marking them for deletion.

Document your connection filter and selected OUs / containers and check your target profiles against them. If you're using a complex LDAP filter on your import connection, you should consider using an LDAP tool like LDP.exe or LDAP Browser to test the LDAP filter and make sure it includes and excludes the users you think it should.

 

ADI Step 4: My Site Cleanup Job

While "Set-SPProfileServiceApplication $upa -PurgeNonImportedObjects $true" marks out-of-scope profiles for deletion, it doesn't actually delete anything. That's left to the My Site Cleanup Job.

Check Central Administration | Monitoring | Timer Jobs | Review Job Definitions | My Site Cleanup Job. Make sure it's set to run at least once per day.

Important: In SharePoint 2016, there were some major changes made to how the My Site Cleanup Job works. Instead of immediately deleting profiles that are marked for deletion, it schedules the profiles to be deleted after 30 days. The 30-day setting is hard-coded. There is no way to change it. Also, if your build is pre-August 2017 CU (16.0.4573.1002), this functionality does not work at all, even after 30 days. See this post for details: https://blogs.msdn.microsoft.com/spses/2017/05/22/sharepoint-2016-mysitecleanup-job-functionality-changes/
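
A quick way to check where your farm stands relative to that build:

```powershell
# Compare the farm build against the August 2017 CU (16.0.4573.1002, per the note above).
$build = (Get-SPFarm).BuildVersion
if ($build -ge [Version]"16.0.4573.1002") {
    "Build $build - the delayed (30-day) cleanup should function."
} else {
    "Build $build - pre-August 2017 CU; marked profiles will not be purged automatically."
}
```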

If for some reason you can't wait 30 days to get rid of these profiles, then you'll have to delete them via PowerShell script. My colleague Adam has a nice option for doing that here: https://blogs.technet.microsoft.com/adamsorenson/2018/02/20/deleting-user-profiles-using-powershell/

I've also added my own take on this, which is slightly more automated as you don't have to prepare the input file. Instead it just deletes all profiles that are bDeleted = 1 in the upa.userprofile_Full table of the Profile database:

# BDeletedCleanup.ps1
# This PowerShell script is provided "as-is" with no warranties expressed or implied. Use at your own risk.
# Please back up your UPA databases before running this.
# This script accesses the UPA associated with the given web application and deletes all
# user profiles that have the bDeleted flag set to 1.
# NOTE: It only works as-is when there is a single UPA in the farm. If you have multiple,
# you'll need to update where $upa is set.
# Only one value needs to be updated below: the $webapp variable.

# Update the web application with one that is associated to the UPA.
$webapp = "http://www.contoso.com"

Add-PSSnapin *SharePoint* -ErrorAction SilentlyContinue

# Determine the SharePoint version so we run the right SQL query
# (SharePoint 2016 moved the profile tables into the "upa" schema).
$build = Get-SPFarm | Select-Object BuildVersion
$is2016 = $build.BuildVersion.Major -ge 16

# SQL query helper function.
function Run-SQLQuery ($ConnectionString, $SqlQuery)
{
    $SqlConnection = New-Object System.Data.SqlClient.SqlConnection
    $SqlConnection.ConnectionString = $ConnectionString
    $SqlCmd = New-Object System.Data.SqlClient.SqlCommand
    $SqlCmd.CommandText = $SqlQuery
    $SqlCmd.Connection = $SqlConnection
    $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
    $SqlAdapter.SelectCommand = $SqlCmd
    $DataSet = New-Object System.Data.DataSet
    $SqlAdapter.Fill($DataSet) | Out-Null   # suppress the row count Fill() returns
    $SqlConnection.Close()
    $DataSet.Tables[0]
}

# Get the Profile database connection string via reflection
# (the ProfileDatabase property is non-public).
$upa = Get-SPServiceApplication | Where-Object { $_.TypeName -eq "User Profile Service Application" }
$propData = $upa.GetType().GetProperties([System.Reflection.BindingFlags]::Instance -bor [System.Reflection.BindingFlags]::NonPublic)
$profDatabase = $propData | Where-Object { $_.Name -eq "ProfileDatabase" }
$prof = $profDatabase.GetValue($upa)
$connStr = $prof.DatabaseConnectionString

# Run the SQL query to gather the profiles marked bDeleted.
if ($is2016)
{ $inputFile = Run-SQLQuery -ConnectionString $connStr -SqlQuery "SELECT [NTName] FROM upa.UserProfile_Full WHERE bDeleted = 1" }
else
{ $inputFile = Run-SQLQuery -ConnectionString $connStr -SqlQuery "SELECT [NTName] FROM UserProfile_Full WHERE bDeleted = 1" }

# Set up the SharePoint objects.
$site = New-Object Microsoft.SharePoint.SPSite($webapp)
$serviceContext = [Microsoft.SharePoint.SPServiceContext]::GetContext($site)
$pm = New-Object Microsoft.Office.Server.UserProfiles.UserProfileManager($serviceContext)

# Create the log file. Each run creates a new file with the current time in the filename.
$dateTime = Get-Date -Format "dd-MMM-yyyy HH-mm-ss"
$UPLogFile = "UserProfiles_Remove_bdeleted_" + $dateTime + ".log"

$inputFile | ForEach-Object {
    $User = $_.NTName
    if ($User -ne $null)
    {
        Write-Host "User Name: $User"
        try
        {
            $profile = $pm.GetUserProfile($User)
            $DisplayName = $profile.DisplayName
            Write-Host "Current User:" $DisplayName
            Add-Content -Path $UPLogFile -Value ("Current User: " + $DisplayName)
            $AccountName = $profile[[Microsoft.Office.Server.UserProfiles.PropertyConstants]::AccountName].Value
            $id = $profile.ID
            Write-Host "ID for the user" $DisplayName "is" $id
            Add-Content -Path $UPLogFile -Value ("ID for the user " + $DisplayName + " is " + $id)
            Write-Host "Removing the Profile..."
            Add-Content -Path $UPLogFile -Value "Removing the Profile..."
            try
            {
                $pm.RemoveUserProfile($id)
                Write-Host "Successfully removed the profile" $AccountName
                Add-Content -Path $UPLogFile -Value ("Successfully removed the profile " + $AccountName)
                Add-Content -Path $UPLogFile -Value " "
            }
            catch
            {
                Write-Host "Failed to remove the profile" $AccountName
                Add-Content -Path $UPLogFile -Value ("Failed to remove the profile " + $AccountName)
                Add-Content -Path $UPLogFile -Value " "
            }
        }
        catch
        {
            Write-Host "Exception when handling the user $User - $($Error[0].ToString())"
            Add-Content -Path $UPLogFile -Value ("Exception when handling the user " + $User)
            Add-Content -Path $UPLogFile -Value " "
        }
    }
}

 

 

External Identity Manager (aka: "MIM Sync")

 

MIM Sync Step 1: Determine if the profile is already marked for deletion.

Run this SQL query against the Profile database:

Select * from upa.userprofile_full where bDeleted = 1

 

If your target profiles are in the results, that means they are already marked for deletion. All you should need to do is run the My Site Cleanup Job. See step 4 below.

Note: Managed profiles marked for deletion should also show in Central Admin | Your UPA | Manage User Profiles | Profiles Missing from Import.

 

 

MIM Sync Step 2: Determine if the profile is managed or unmanaged.

Run the following PowerShell to get a list of all your unmanaged profiles:

$upa = Get-spserviceapplication | ?{$_.typename -match "profile"}

Set-SPProfileServiceApplication $upa -GetNonImportedObjects $true | Out-File c:\temp\NonImportedProfiles.txt

 

If the target profiles show up in the "NonImportedProfiles.txt" file, then you need to manually mark them for deletion with PowerShell:

$upa = Get-spserviceapplication | ?{$_.typename -match "profile"}

Set-SPProfileServiceApplication $upa -PurgeNonImportedObjects $true

 

If the target profiles are managed profiles and not marked for deletion, then you need to look into why the Sync is not marking them for deletion.

Document your Sync connection filters and selected OUs / containers and check your target profiles against them.

Take a look at the MIM client (miisclient.exe) on your MIM server. Detailing exactly what to look for in the MIM client is beyond the scope of this blog post, but generally speaking, if you have entire Sync steps that are failing, that's likely the problem.

 

MIM Sync Step 3: Run a Full Sync.

If you've made recent changes to your Sync connection filters or AD container selection, it takes a Full Sync to apply those changes to all profiles. Also, an Incremental Sync only gets one shot at updating a profile. If something went wrong during the Incremental that ran right after the user fell out-of-scope (deleted from AD, etc), that change is missed. If the user object in AD does not change again, the Incremental will not attempt to pull that user in again. Therefore, a failure during a single run of the Sync could cause the profile to never be processed. For this reason, we recommend that you run a Full Sync on some type of recurring schedule. The interval is up to you, but something between once a week and once a month should work.

 

MIM Sync Step 4: My Site Cleanup Job

This step is exactly the same as the "ADI Step 4: My Site Cleanup Job" section above; see that section.

Only 7 business days left until the deadline! 2018 MPN Partner of the Year Awards [Updated 4/9]


As announced earlier, only 7 business days remain until the deadline for the 2018 MPN Partner of the Year Awards.

Are your preparations for submission underway?

If you are still preparing, please review the submission details below and submit your entry via the Award Submission Tool before the deadline.

Submission deadline: April 17, 2018, 11:59 PM Pacific Standard Time (April 18, 3:59 PM Japan time)

(All nominations must be in the tool by 11:59pm PST, April 17, 2018. No Exceptions.)

Details of the 2018 MPN Partner of the Year Awards are available here.

 

 

 

 

Introducing a new way to purchase Azure monitoring services


Written by: Shiva Sivakumar (Director of Program Management, Azure Monitoring & Diagnostics)

This post is a localization of Introducing a new way to purchase Azure monitoring services, published on April 3, 2018.

Today, organizations of all kinds rely on Azure's application, infrastructure, and network monitoring capabilities to keep their critical workloads up and running. As these services have grown remarkably, customers are using multiple monitoring services to detect and resolve issues early. To make adopting Azure monitoring services even smoother, Microsoft is introducing a new way to purchase them. The new pricing model for these services has three key characteristics.

1. Pay-as-you-go

A simple pay-as-you-go model applies across the entire monitoring portfolio. You pay only for what you use, with full control and visibility.

2. Billing per gigabyte (GB) of data collected

Pricing changes from a per-node model to a per-gigabyte model based on the amount of data collected. Customers told us that the value of monitoring lies in the volume of data collected and the insights extracted from it, rather than in the number of nodes, and the new model reflects that feedback. It also accounts for the growing adoption of containers and microservices, where the definition of a node is increasingly blurred. This per-GB, data-volume-based pricing applies to application, infrastructure, and network monitoring alike.

3. Existing customers can choose their pricing model

We understand that some customers already using these services may wish to stay on the per-node model. So that this change does not affect them, existing customers can continue to choose per-node pricing. Customers who currently hold Operations Management Suite licenses can keep their existing pricing model, or switch to the new one at renewal.

The new pricing model is available to all customers starting today. Many of these changes are the result of Microsoft working together with the community. Thank you for your continued feedback on building Azure's infrastructure, application, and network monitoring capabilities.

For details on the new pricing model, see the pricing calculator and the pricing pages for each product (Log Analytics, Network Watcher, Azure Monitor, Application Insights).

To learn about managing data collection, read the Log Analytics and Application Insights documentation. There are no pricing changes for services in the Azure security and management portfolio, including Azure Backup, Azure Security Center, and Azure Site Recovery.

 


A flexible new way to purchase Azure SQL Database


Written by: Alexander (Sasha) Nosov (Principal Program Manager, Azure SQL Database)

This post is a localization of A flexible new way to purchase Azure SQL Database, published on April 4, 2018.

Microsoft has launched a preview of a new purchasing model for the elastic pool and single database deployment options in Azure SQL Database. Following the recent announcement of the vCore-based SQL Database Managed Instance, this continues our effort to expand customer choice, with improved flexibility, control, and transparency. As with Managed Instance, the elastic pool and single database options in the vCore-based model can save you up to 30%* with Azure Hybrid Benefit for SQL Server.

Azure SQL Database

Two new service tiers to optimize flexibility and performance

The new vCore-based model introduces two service tiers: General Purpose and Business Critical. These tiers let you define and control compute and storage configurations independently, so you can optimize precisely for your application's requirements. When migrating to the cloud, this model lets you translate on-premises workload requirements directly to the cloud. General Purpose is designed for most business workloads and offers economical, balanced, and scalable compute and storage options. Business Critical is designed for business applications with demanding I/O requirements and provides the highest resilience to failures.

Choose your performance level in either the DTU-based or the vCore-based model

To give you the freedom to choose the best option for your workload, the DTU-based model remains available alongside the new vCore-based model. The DTU-based model is well suited to simple resource purchasing and configuration: it offers pre-configured resource bundles across a wide range of performance options, and is a good fit if you prefer a straightforward, fixed monthly price without customizing the underlying resources. The vCore-based model is better suited if you want detailed visibility into the underlying resources and independent scaling to optimize performance, or if you want to bring your existing SQL Server licenses to the cloud. Moving between the DTU-based and vCore-based models is simple and can be done online, just like upgrading from the Standard to the Premium service tier.

Save up to 30%* in the vCore-based model with Azure Hybrid Benefit for SQL Server

Applying Azure Hybrid Benefit for SQL Server reduces costs even further in the vCore-based model. This benefit is unique to Azure: using SQL Server Enterprise Edition or Standard Edition licenses covered by Software Assurance, you can save up to 30% on single databases, elastic pools, and Managed Instance in the vCore-based model.

Getting started

The new vCore-based service tiers are available in all Azure regions as of April 6, 2018. Existing Azure SQL Database customers can switch to the new service tiers and configure their databases from the portal, as shown in the figures below. You can also create new databases in the General Purpose or Business Critical service tier.

Bucharest (1)

Bucharest (2)

For details on the vCore-based purchasing model, see the service tiers documentation and the pricing page.

*Savings based on 730 hours/month of a Business Critical Managed Instance with 8 vCores in the East US region, calculated from the regular price (license included) versus the discounted price (with Azure Hybrid Benefit for SQL Server), which includes the cost of Software Assurance for SQL Server Enterprise Edition. Actual savings vary by region, pricing tier, and Software Assurance level. Prices as of December 2017 and subject to change.

 

Troubleshooting Using Ring Buffers


Working at a customer site, I happened to notice that the Exception ring buffer contained a lot (and I mean A.L.O.T.) of errors. Ring buffers were introduced in SQL Server 2008 and contain a fixed-size cycle of records for the ring buffer they pertain to. Now, note that sys.dm_os_ring_buffers is a DMV which is NOT SUPPORTED, and future compatibility is NOT guaranteed… but that doesn't mean we can't use them, right? 😉

So the available ring buffers are:

SELECT DISTINCT ring_buffer_type FROM sys.dm_os_ring_buffers

Which gives me:

image

For this Blog Post, I am particularly interested in the RING_BUFFER_EXCEPTION ring buffer, as this will show me the exceptions which have been occurring across the instance. Since these exceptions might only be a severity of 15 or below, it is entirely possible that we don’t know they are occurring – unless we look in the ring buffer.

But how?

If I cast the ring buffer record as an XML datatype, I can see the format of the payload:

image

So, I really need to break this XML down so I can do something useful with it, and whilst I am at it, I can use the sys.messages table to turn the Error (<Error>926</Error> in this case) into something slightly more meaningful, with:

SELECT
    DATEADD(ms, -1 * (info.ms_ticks - XEvent.value('(@time)[1]', 'INT')), GETDATE()) AS [TimeStamp]
    , XEvent.value('(@id)[1]', 'INT') AS [ID]
    , XEvent.value('(./Exception/Error)[1]', 'INT') AS [Error]
    , XEvent.value('(./Exception/Severity)[1]', 'INT') AS [Severity]
    , XEvent.value('(./Exception/State)[1]', 'INT') AS [State]
    , XEvent.value('(./Exception/UserDefined)[1]', 'INT') AS [UserDefined]
    , sm.text
FROM (
    SELECT CAST(record AS XML) AS [target_data]
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = 'RING_BUFFER_EXCEPTION'
) AS XEventsRingBuffer
CROSS APPLY target_data.nodes('/Record') AS XEventData (XEvent)
INNER JOIN sys.messages sm
    ON sm.message_id = XEvent.value('(./Exception/Error)[1]', 'INT')
CROSS APPLY sys.dm_os_sys_info info

Resulting in:

image

Ok – that’s cool. So I can now use the ring buffer to extract the XML of the exception and turn it into something a bit more human readable.
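Outside of T-SQL, the same <Record> payload can be unpacked with any XML parser. Here is a minimal sketch in Python, using a trimmed-down, hypothetical record shaped like the payload shown above (the real record carries more fields):

```python
import xml.etree.ElementTree as ET

# A trimmed-down RING_BUFFER_EXCEPTION record, shaped like the payload
# shown above. This sample is hypothetical, for illustration only.
record = """
<Record id="18" type="RING_BUFFER_EXCEPTION" time="214171">
  <Exception>
    <Error>926</Error>
    <Severity>14</Severity>
    <State>1</State>
    <UserDefined>0</UserDefined>
  </Exception>
</Record>
"""

root = ET.fromstring(record)

# Pull out the same fields the T-SQL query extracts with .value()
exception = {
    "id": int(root.attrib["id"]),
    "time": int(root.attrib["time"]),
    "error": int(root.findtext("Exception/Error")),
    "severity": int(root.findtext("Exception/Severity")),
    "state": int(root.findtext("Exception/State")),
}
print(exception)
```

This mirrors the XPath expressions in the query: the attributes live on the Record element, and the error details sit under the nested Exception element.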

What if I need to investigate further? What if the Exception leads me to something not so straight-forward? Well, XEvents to the rescue:

-- If the event session already exists, drop it
IF EXISTS (SELECT 1 FROM [sys].[server_event_sessions] WHERE [name] = N'XE_Error_Capture')
BEGIN
    DROP EVENT SESSION [XE_Error_Capture] ON SERVER;
END
-- Create the XEvent session to catch the error
CREATE EVENT SESSION [XE_Error_Capture]
ON SERVER
ADD EVENT [sqlserver].[error_reported] (
    ACTION([sqlserver].[session_id],
           [sqlserver].[database_name],
           [sqlserver].[tsql_stack])
    WHERE ([error_number] = 123456789)) -- Replace with your error number
ADD TARGET [package0].[event_file](SET FILENAME = N'C:\Temp\XE_Error_Capture.xel')
GO
-- Start the session
ALTER EVENT SESSION [XE_Error_Capture] ON SERVER
STATE = START;
GO

I can now use an XEvent to specifically hunt out the Errors previously discovered in the ring buffer – including the database which they are occurring in, and the tsql_stack (should I need it). This can be broken down in the usual manner, or even viewed directly with the XEvent Viewer built right into SSMS.

Neat 🙂

Tip of the Day: Microsoft HoloLens and the Commercial Suite


Today's tip...

Hopefully you’ve had the opportunity to test out the Microsoft HoloLens, “…the first fully self-contained holographic computer running Windows 10.” Microsoft HoloLens runs either Windows Holographic (Windows 10 designed for Developer Mode) or Windows Holographic for Business in the Commercial Suite.

I have always been interested to see how HoloLens will be used in the Enterprise, and came across a great link (and kept digging into links after that).

There’s a great deal of information available online and some very interesting videos. Take a few minutes and check them out!

References:

Why can I browse the network with discovery off?


Quick Tip:

You open File Explorer, click on Network, and find you can browse items even though network discovery is turned off. Why?

The first thing to do is check that network discovery and file and printer sharing are indeed off for the current profile.

If they are, then the most likely culprit is the firewall. If the firewall is off, or its rules have been modified, you will be able to browse the network even though network discovery and file and printer sharing are off.

 

 

SFB Online Client Sign-in and Authentication Deep Dive: Part 2


Scenario: Pure Online (O365) environment, SFB user is homed Online, NO ADFS, MA (Modern Auth) is Enabled in O365

NOTE:

I have tried my best to ensure the information below is accurate. Some of the terms I use to describe things like Modern Auth provider, O365 AD, Org ID etc. may not be standard terminology, I use them solely to make the understanding simpler. My intention here is to explain what happens in the background when a SFB client signs in so that it helps engineers and customers troubleshooting issues related to Sign in and Authentication.

 

How Does it Work?

Below is a High level explanation on how the SFB online Client Sign in process works

SIP URI of the user - e1@mshaikh.onmicrosoft.com

  1. SFB client Queries DNS for Lyncdiscover.domain.com. This should point to Webdir.online.lync.com
  2. SFB Client then sends an unauthenticated GET request to Lyncdiscover.domain.com
  3. The Client is then redirected to Autodiscover (https://webdir2a.online.lync.com/Autodiscover/AutodiscoverService.svc/root/user?originalDomain=mshaikh.onmicrosoft.com)
  4. SFB Client then sends a Request to Autodiscover to discover its pool for sign in.
  5. The Client is then challenged and is provided the URL for Webticket service (https://webdir2a.online.lync.com/WebTicket/WebTicketService.svc) where it can request a Webticket
  6. The Client then sends a POST request to Webticket Service
  7. Webticket Service Redirects the Client to Modern Auth Provider (login.windows.net)
  8. Now in order to authenticate, the client reaches out to login.windows.net and requests a Token. The intention here is to get a Token from login.windows.net; from this point onwards, login.windows.net will redirect the client to login.microsoftonline.com, and we will see several exchanges between the client, login.microsoftonline.com, and login.windows.net
  9. The Client may receive a Password prompt and once the correct password is provided a token will be issued by Login.windows.net
  10. The Client then submits this token to Webticket Service
  11. Webticket service now will grant a Webticket to the Client
  12. The client then submits this webticket to Autodiscover
  13. In Response Autodiscover will provide the Pool names (sipfed2a.online.lync.com, port 443) where the client can send Register to Sign in
  14. The SFB client now sends a SIP register to the Online Edge pool (sipfed2a.online.lync.com, port 443)
  15. It is then challenged for authentication again, here the ONLY supported method of authentication is TLS-DSK, The client is provided a Cert provisioning URL (https://webdir2a.online.lync.com:443/CertProv/CertProvisioningService.svc) in the 401 unauthenticated response
  16. The SFB client then sends a request to Certprov
  17. Here again the Client is challenged for authentication and is redirected to webticket service to get Webticket
  18. The Client had already Obtained a webticket in step 11 above
  19. The client will submit the same webticket obtained in step 11 to the Cert provisioning service
  20. The Client then receives a certificate
  21. The SFB client can now send a Register again and use the certificate it downloaded for authentication
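The discovery portion of the steps above is mechanical enough to sketch: the client derives the lyncdiscover host purely from the domain part of the user's SIP URI. A minimal illustration in Python (the helper name is ours, and a real client of course does far more than build a URL):

```python
def autodiscover_url(sip_uri: str) -> str:
    """Build the lyncdiscover URL a client probes first, from the
    domain part of the user's SIP URI (step 1 above)."""
    domain = sip_uri.split("@", 1)[1]
    return f"https://lyncdiscover.{domain}/"

print(autodiscover_url("e1@mshaikh.onmicrosoft.com"))
# https://lyncdiscover.mshaikh.onmicrosoft.com/
```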

 

Below is a graphical representation of the SFB online Client Sign in process

 

clip_image001[4]

 

Detailed Explanation of SFB online Client Sign in process with LOG Snippets:

SIP URI of the user - e1@mshaikh.onmicrosoft.com

When a SFB client wants to sign in, it needs to know where it can send its request. Whenever a user enters his SIP URI to sign in, the SFB client forms an autodiscover URL using the domain name it extracts from the user's SIP URI to start the discovery process, and then sends an unauthenticated GET request to lyncdiscover.domain.com. The response code for this request will be '200 OK', and in the response we should receive the external web services URL for autodiscover.

You can see the request and Response below

clip_image002[4]

The SFB Client learns that it needs to Contact https://webdir2a.online.lync.com/

It then tries to Do a TCP handshake with webdir2a.online.lync.com

You can see that in a Network trace, refer Screen shot below

clip_image003[4]

Once the Initial TCP handshake is Complete, The Client will perform a TLS Handshake,

You can see that in a Network trace, refer Screen shot below

clip_image004[4]

The client then sends a request to the Autodiscover URL for its own domain (in my case @mshaikh.onmicrosoft.com) and in Response it receives the Autodiscover URL's specific to the users Tenant. You can see the request and Response below

clip_image005[4]

The client then sends a request to the user URL. We are here trying to discover a specific users home pool, hence the request will go to the “User” URL.

In the response, the Client receives a Web ticket URL, which provides the location of the WebTicketService.

You can see the request and Response below

clip_image006[4]

The Client then needs to send a Request to the Web ticket service URL in order to obtain a Web ticket. The client will send this request in a POST message to the web ticket Service. Now since Modern Authentication is enabled on the Tenant, in order to grant the webticket the client will first need to get a Token from the Modern Auth provider so the client is redirected to the Modern Auth provider URL - <af:OAuth af:authorizationUri="https://login.windows.net/common/oauth2/authorize" xmlns:af="urn:component:Microsoft.Rtc.WebAuthentication.2010" />

clip_image007[4]
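For context, the redirect to login.windows.net here is a standard OAuth2 authorization request. A rough sketch of how such a URL is assembled; the client_id and redirect_uri values are illustrative placeholders, not the real SFB client's registration:

```python
from urllib.parse import urlencode

# The authorize endpoint from the af:OAuth element shown above.
AUTHORIZE_ENDPOINT = "https://login.windows.net/common/oauth2/authorize"

def build_authorize_url(client_id: str, resource: str, redirect_uri: str) -> str:
    """Assemble an OAuth2 authorization-code request URL of the kind
    the client is redirected to. Parameter values are placeholders."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "resource": resource,
        "redirect_uri": redirect_uri,
    }
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"

url = build_authorize_url(
    client_id="00000000-placeholder",
    resource="https://webdir.online.lync.com",
    redirect_uri="https://localhost/callback",
)
print(url)
```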

The Client then sends a request to the MA/Oauth URL to request a Token, The intention here is to Get a Token from login.windows.net

From this point onwards we will see that login.windows.net will redirect the client to - login.microsoftonline.com.

Below is the Request that client sends to the MA/OAUTH URL and in response it is redirected to AD - login.microsoftonline.com

clip_image008[4]

We have to remember that "The intention here is to Get a Token from login.windows.net" we will see several exchanges happening between client to login.microsoftonline.com. Below are screen shots showing these exchanges.

clip_image009[4]

Eventually the client will receive a Cookie from login.microsoftonline.com which is shown below

clip_image010[4]

The client then submits this cookie to login.windows.net which is the MA provider and Then, the Modern Auth provider will issue the MA token.

below is the message showing this

clip_image011[4]

At this Point the client has received the below token from the Modern Auth provider login.windows.net/common/oauth2/token

clip_image012[4]

Now the client will submit this token to the webticket URL, and the Webticket service will issue the webticket, Shown below

clip_image013[3]

The client will then submit this webticket to Autodiscover and in return it will receive the POOL names where it has to send the Register to Sign in.

clip_image014[3]

Once the Client receives the pool names it will then Send a SIP REGISTER message to the SFB pool in order to sign in. You can see that in the Client UCCAPI log file. This is shown in the snippet below

clip_image015[3]

In response the Client will now receive a 401 Unauthorized message again and the server will again ask the client to authenticate itself. Here the ONLY method of authentication that is available is TLS-DSK (Cert based authentication)

The SFB online server will provide the Client a Cert provisioning URL in the 401 you can see that in the snippet below

clip_image016[3]

This means that the Client now needs to present a Certificate that can then be used to authenticate the client. Since this is the first time the client is signing in it will NOT have the certificate installed. This certificate is ideally downloaded after the client signs in for the first time and is valid for about 8 hours.

Since the client does not have a valid certificate it now has to Re-Authenticate to the Cert provisioning service.

The Process for this will again be the same, the client has to first get a Web ticket, to get a web ticket it needs to get a Token from O365 AD, but we know that the client has already done these steps earlier. SO it already has a Web Ticket from the Web services URL. The Client needs to submit this same web ticket that it had obtained now to the Cert provisioning Service and once it submits the web ticket it will serve as a proof of authentication.

The Client learns about this by first sending a Mex request to the Cert provisioning URL. You can see that in the Trace below

clip_image017[3]

The Client then submits the Web Ticket that it had received to the Cert provisioning URL it received above, after this it receives a 200 OK and sign in is now complete

clip_image018[3]

 

Sign in is NOW Complete

SFB Online Client Sign-in and Authentication Deep Dive: Part 1


Scenario: Pure Online (O365) environment, SFB user is homed Online, NO ADFS, MA (Modern Auth) is Disabled in O365

NOTE:

I have tried my best to ensure the information below is accurate. Some of the terms I use to describe things like Modern Auth provider, O365 AD, Org ID etc. may not be standard terminology, I use them solely to make the understanding simpler. My intention here is to explain what happens in the background when a SFB client signs in so that it helps engineers and customers troubleshooting issues related to Sign in and Authentication.

 

How Does it Work?

Below is a High level explanation on how the SFB online Client Sign in process works

clip_image001

SIP URI of the user - e1@mshaikh.onmicrosoft.com

  1. SFB client Queries DNS for Lyncdiscover.domain.com. This should point to Webdir.online.lync.com
  2. SFB Client then sends an unauthenticated GET request to Lyncdiscover.domain.com
  3. The Client is then redirected to Autodiscover (https://webdir2a.online.lync.com/Autodiscover/AutodiscoverService.svc/root/user?originalDomain=mshaikh.onmicrosoft.com)
  4. SFB Client then sends a Request to Autodiscover to discover its pool for sign in.
  5. The Client is then challenged and is provided the URL for Webticket service (https://webdir2a.online.lync.com/WebTicket/WebTicketService.svc) where it can request a Webticket
  6. The Client then sends a POST request to Webticket Service which requires the client to provide a Token from Org ID (login.microsoftonline.com)
  7. Now in order to authenticate the client reaches out to Org ID and requests a Token
  8. The Client may receive a Password prompt and once the correct password is provided Org ID will issue a Token
  9. The Client then submits this token to Webticket Service
  10. Webticket service now will grant a Webticket to the Client
  11. The client then submits this webticket to Autodiscover
  12. In Response Autodiscover will provide the Pool names (sipfed2a.online.lync.com, port 443) where the client can send Register to Sign in
  13. The SFB client now sends a SIP register to the Online Edge pool (sipfed2a.online.lync.com, port 443)
  14. It is then challenged for authentication again, here the ONLY supported method of authentication is TLS-DSK, The client is provided a Cert provisioning URL (https://webdir2a.online.lync.com:443/CertProv/CertProvisioningService.svc) in the 401 unauthenticated response
  15. The SFB client then sends a request to Certprov
  16. Here again the Client is challenged for authentication and is redirected to webticket service to get Webticket
  17. The Client had already Obtained a webticket in step 10 above
  18. The client will submit the same webticket obtained in step 10 to the Cert provisioning service
  19. The Client then receives a certificate
  20. The SFB client can now send a Register again and use the certificate it downloaded for authentication

 

Below is a graphical representation of the SFB online Client Sign in process

 

clip_image002

 

 

Detailed Explanation of SFB online Client Sign in process with LOG Snippets:

SIP URI of the user - e1@mshaikh.onmicrosoft.com

When a SFB client wants to sign in, it needs to know where it can send its request. Whenever a user enters his SIP URI to sign in, the SFB client forms an autodiscover URL using the domain name it extracts from the user's SIP URI to start the discovery process, and then sends an unauthenticated GET request to lyncdiscover.domain.com. The response code for this request will be '200 OK', and in the response we should receive the external web services URL for autodiscover.

You can see the request and Response below

clip_image004

The SFB Client learns that it needs to Contact https://webdir2a.online.lync.com/

It then tries to Do a TCP handshake with webdir2a.online.lync.com

You can see that in a Network trace, refer Screen shot below

clip_image005

Once the Initial TCP handshake is Complete, The Client will perform a TLS Handshake,

You can see that in a Network trace, refer Screen shot below

clip_image006

The client then sends a request to the Autodiscover URL for its own domain (in this case @mshaikh.onmicrosoft.com) and in Response it receives the Autodiscover URL's specific to the users Tenant. You can see the request and Response below

clip_image007

The client then sends a request to the user URL. We are here trying to discover a specific users home pool, hence the request will go to the “User” URL.

In the response, the Client receives a Web ticket URL, which provides the location of the WebTicketService.

You can see the request and Response below

clip_image008

The Client then needs to send a Request to the Web ticket service URL in order to obtain a Web ticket. The client will send this request in a POST message to the web ticket Service and in response it receives the actual individual Web ticket service URL's

clip_image009

The Client has to submit a Request to this web ticket URL now in order to obtain a web ticket. But if it does that then it will need to authenticate first, unless the Client authenticates itself it will not be issued a web ticket. Since this user is Homed in SFB online and was created directly in O365 AD the Client needs to reach out to O365 AD (also known as Org ID) to get authenticated first. Once it is authenticated by Org ID then the Client will receive a Token from Org ID that will prove that the client has been authenticated. The Client will then reach out back to the Webservice URL and submit the Token it received from Org ID.

So right now the Client first has to reach out to Org ID in order to authenticate and get a Token.

The process of reaching out to O365 AD is initiated with the help of the Microsoft Online Sign-in Assistant that is installed on the computer. You can find logging for this in C:\MSOTraceLite\MSOCredprov.txt

The Org ID is located at https://login.microsoftonline.com

You can see the Client reaching out to https://login.microsoftonline.com and requesting a Token, and subsequent responses will show that it receives a Token from https://login.microsoftonline.com after entering the password when prompted.

clip_image010

clip_image011

Once the Client receives a Token from Org ID it then submits this token to the Web Ticket Service https://webdir2a.online.lync.com/WebTicket/WebTicketAdvancedService.svc/WsFed_bearer

In Response the Web Ticket Service will now Issue the Client a Web Ticket. You can see this in the Trace below

clip_image012

The Client will Then Submit this web ticket back to the AutoDiscover User URL - /Autodiscover/AutodiscoverService.svc/root/user?originalDomain=mshaikh.onmicrosoft.com&sipuri=e1@mshaikh.onmicrosoft.com

In response it will now receive the Internal and External addresses of the Pool names where the user is Homed.

You can see this in the trace below

clip_image013

Once the Client receives the pool names it will then Send a SIP REGISTER message to the SFB pool in order to sign in. Depending on the Users location this will ideally be sent to the SFB online Edge pool. You can see that in the Client UCCAPI log file. This is shown in the snippet below

clip_image014

In response the Client will now receive a 401 Unauthorized message again and the server will again ask the client to authenticate itself. Here the ONLY method of authentication that is available is TLS-DSK (Cert based authentication)

The SFB online server will provide the Client a Cert provisioning URL in the 401 you can see that in the snippet below

clip_image015

This means that the Client now needs to present a Certificate that can then be used to authenticate the client. Since this is the first time the client is signing in it will NOT have the certificate installed. This certificate is ideally downloaded after the client signs in for the first time and is valid for about 8 hours.

Since the client does not have a valid certificate it now has to Re-Authenticate to the Cert provisioning service.

The Process for this will again be the same, the client has to first get a Web ticket, to get a web ticket it needs to get a Token from O365 AD, but we know that the client has already done these steps earlier. SO it already has a Web Ticket from the Web services URL. The Client needs to submit this same web ticket that it had obtained now to the Cert provisioning Service and once it submits the web ticket it will serve as a proof of authentication.

The Client learns about this by first sending a Mex request to the Cert provisioning URL. You can see that in the Trace below

clip_image016

The Client then submits the Web Ticket that it had received to the Cert provisioning URL it received above, after this it receives a 200 OK and a Certificate is now downloaded.

clip_image017

The client will then send a SIP REGISTER to the SFB pool again, submitting this certificate to the pool, and in response it will receive a 200 OK.

Sign in is NOW complete!!!

SFB Online Client Sign-in and Authentication Deep Dive: Part 3


Scenario: Pure Online (O365) environment, SFB user is homed Online, ADFS is Configured, MA (Modern Auth) is Disabled in O365

NOTE:

I have tried my best to ensure the information below is accurate. Some of the terms I use to describe things like Modern Auth provider, O365 AD, Org ID etc. may not be standard terminology, I use them solely to make the understanding simpler. My intention here is to explain what happens in the background when a SFB client signs in so that it helps engineers and customers troubleshooting issues related to Sign in and Authentication.

How Does it Work?

Below is a High level explanation on how the SFB online Client Sign in process works

clip_image001[4]

SIP URI of the user - NJ@JohnsonDataSystems.com

  1. SFB client Queries DNS for Lyncdiscover.domain.com. This should point to Webdir.online.lync.com
  2. SFB Client then sends an unauthenticated GET request to Lyncdiscover.domain.com
  3. The Client is then redirected to Autodiscover https://webdir2a.online.lync.com/Autodiscover/AutodiscoverService.svc/root/user?originalDomain=johnsondatasystems.com
  4. SFB Client then sends a Request to Autodiscover to discover its pool for sign in.
  5. The Client is then challenged and is provided the URL for Webticket service (https://webdir2a.online.lync.com/WebTicket/WebTicketService.svc) where it can request a Webticket
  6. The Client then sends a POST request to Webticket Service which requires the client to provide a Token from Org ID (login.microsoftonline.com)
  7. Now in order to authenticate the client reaches out to Org ID and requests a Token
  8. Since the tenant is enabled for ADFS the client is redirected to the ON Premise ADFS server https://sts.cloudsfb.com
  9. SFB client will then send a request to ADFS server and request a token
  10. The Client may receive a Password prompt (or previously saved password from credential manager is passed) and once the correct password is provided, ADFS will issue a Token to the client
  11. The Client then submits this token to Org ID
  12. ORG ID will now issue its own Token to the client
  13. The Client then submits this token that it received from ORG ID to Webticket Service
  14. Webticket service now will grant a Webticket to the Client
  15. The client then submits this webticket to Autodiscover
  16. In response, Autodiscover will provide the pool name (sipfed2a.online.lync.com, port 443) where the client can send a REGISTER to sign in
  17. The SFB client now sends a SIP REGISTER to the Online Edge pool (sipfed2a.online.lync.com, port 443)
  18. It is then challenged for authentication again. Here the ONLY supported method of authentication is TLS-DSK; the client is provided a Cert provisioning URL (https://webdir2a.online.lync.com:443/CertProv/CertProvisioningService.svc) in the 401 Unauthorized response
  19. The SFB client then sends a request to Certprov
  20. Here again the Client is challenged for authentication and is redirected to webticket service to get Webticket
  21. The Client had already Obtained a webticket in step 14 above
  22. The client will submit the same webticket obtained in step 14 to the Cert provisioning service
  23. The Client then receives a certificate
  24. The SFB client can now send a Register again and use the certificate it downloaded for authentication
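The numbered steps above form a strict pipeline: each exchange depends on an artifact produced by the one before it. The following Python sketch only makes that ordering explicit; every function name here is an illustrative stand-in for one network exchange, not a real API.

```python
def sign_in(discover, get_token, get_webticket, get_pool, register, get_cert):
    """Illustrative ordering of the SFB Online sign-in steps.

    Each argument is a callable standing in for one network exchange;
    none of these names correspond to a real SDK or API.
    """
    discover()                        # steps 1-5: lyncdiscover -> autodiscover -> webticket URL
    token = get_token()               # steps 6-12: ADFS token, then Org ID token
    webticket = get_webticket(token)  # steps 13-14: exchange the token for a webticket
    pool = get_pool(webticket)        # steps 15-16: webticket -> home pool names
    register(pool, cert=None)         # steps 17-18: SIP REGISTER, answered with 401 + CertProv URL
    cert = get_cert(webticket)        # steps 19-23: reuse the same webticket to obtain a certificate
    return register(pool, cert=cert)  # step 24: REGISTER again, authenticating with TLS-DSK
```

The point of the sketch is that the webticket obtained once in step 14 is the single credential reused for both pool discovery and certificate provisioning.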

Below is a graphical representation of the SFB online Client Sign in process

clip_image002[4]

Detailed Explanation of SFB online Client Sign in process with LOG Snippets:

SIP URI of the user - NJ@JohnsonDataSystems.com

When a SFB client wants to sign in, it needs to know where to send its sign-in request. When a user enters their SIP URI, the SFB client extracts the domain name from the SIP URI, forms an autodiscover URL from it, and sends an unauthenticated GET request to lyncdiscover.domain.com to start the discovery process. The response code for this request will be '200 OK', and in the response we should receive the external web services URL for autodiscover.

You can see the request and Response below

clip_image003[4]
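The lyncdiscover address the client queries here can be derived mechanically from the SIP URI. A small Python sketch of that derivation (the helper name is mine, not part of any SFB API):

```python
def lyncdiscover_url(sip_uri: str) -> str:
    """Build the external autodiscover URL from a user's SIP URI by
    extracting the domain part, as the SFB client does at sign-in."""
    domain = sip_uri.split("@", 1)[1].lower()
    return f"https://lyncdiscover.{domain}/"

print(lyncdiscover_url("NJ@JohnsonDataSystems.com"))
# https://lyncdiscover.johnsondatasystems.com/
```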

The SFB Client learns that it needs to Contact https://webdir2a.online.lync.com/

It then does a TCP handshake with webdir2a.online.lync.com, followed by a TLS handshake. (I haven't included the TCP and TLS handshake screenshots here; you can see them if you collect a network trace while signing in.)

The client then sends a request to the Autodiscover URL for its own domain (in my case @JohnsonDataSystems.com), and in response it receives the Autodiscover URLs specific to the user's tenant. You can see the request and response below

clip_image004[4]

The client then sends a request to the user URL. We are trying to discover a specific user's home pool here, hence the request goes to the "User" URL.

In the response, the Client receives a Web ticket URL, which provides the location of the WebTicketService.

You can see the request and Response below

clip_image005[4]

The client then needs to send a request to the Web ticket service URL in order to obtain a web ticket. The client sends this request in a POST message to the web ticket service, and in response it receives the actual individual web ticket service URLs

clip_image006[3]

The client now has to submit a request to this web ticket URL in order to obtain a web ticket, but it must authenticate first; unless the client authenticates itself it will not be issued a web ticket. Since this user is homed in SFB Online, the client needs to reach out to O365 AD (Org ID) to get authenticated first.

So the client first has to reach out to Org ID in order to authenticate and get a token.

The process of reaching out to Org ID is initiated with the help of Microsoft Online Sign in Assistant (also known as IDCRL) that is installed on the Computer.

You may not always see this in the Fiddler trace; this attempt to get a token from Org ID is called Org ID auth and is captured in the IDCRL logs on the client PC

The Client will try to reach Org ID (O365 AD) to get a token, but since the Tenant is Enabled for ADFS the O365 AD (org ID) will redirect the client to the ADFS Server URL and the client will have to request a Token from ADFS

The way this works is

  1. The client first tries to reach Org ID (O365 AD) to request a Token
  2. Here it learns that the tenant is enabled for ADFS so it has to now go to the ADFS URL first and ask for a token from ADFS
  3. The client reaches out to ADFS and requests a Token
  4. ADFS will challenge for authentication which will cause a Password prompt to appear, or the user credential stored in the credential manager will be passed to ADFS in which case the user will not be prompted for password
  5. Once the password is provided ADFS will provide the client a Token
  6. The client will then submit this token back to Org ID (O365 AD)
  7. Org ID will in turn provide the client the Org ID token

You can view all of the above if you open the IDCRL log (MSOTrace folder) in the IDCRL parser. A successful token retrieval from ADFS and Org ID should look like the following in the parser (a more detailed explanation can be found at the bottom of this document for reference)

clip_image007[3]

Once the Client receives a Token from O365 AD (Org ID) it then submits this token to the Web Ticket Service https://webdir2a.online.lync.com/WebTicket/WebTicketAdvancedService.svc/WsFed_bearer

In Response the Web Ticket Service will now Issue the Client a Web Ticket. You can see this in the Trace below

clip_image008[3]

The Client will Then Submit this web ticket back to the AutoDiscover User URL - https://webdir2a.online.lync.com/Autodiscover/AutodiscoverService.svc/root/user?originalDomain=johnsondatasystems.com&sipuri=nj@johnsondatasystems.com

In response it will now receive the Internal and External addresses of the Pool names where the user is Homed.

You can see this in the trace below

clip_image009[3]

Once the client receives the pool names it will then send a SIP REGISTER message to the SFB pool in order to sign in. You can see that in the client UCCAPI log file. This is shown in the snippet below

clip_image010[3]

In response the Client will now receive a 401 Unauthorized message again and the server will again ask the client to authenticate itself. Here the ONLY method of authentication that is available is TLS-DSK (Cert based authentication)

The SFB online server will provide the Client a Cert provisioning URL in the 401 you can see that in the snippet below

clip_image011[3]

This means that the Client now needs to present a Certificate that can then be used to authenticate the client. Since this is the first time the client is signing in it will NOT have the certificate installed. This certificate is ideally downloaded after the client signs in for the first time and is valid for about 8 hours.

Since the client does not have a valid certificate it now has to Re-Authenticate to the Cert provisioning service.

The process for this will again be the same: the client has to first get a web ticket, and to get a web ticket it needs a token from O365 AD. But the client has already done these steps earlier, so it already has a web ticket from the web services URL. The client needs to submit this same web ticket to the Cert provisioning service, and once it submits the web ticket it serves as proof of authentication.
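The reuse of the earlier webticket can be pictured as a simple cache: the ticket is fetched once from the webticket service and the same ticket is then presented to each service that challenges for it. A hedged Python sketch (the class and callable names are illustrative only):

```python
class WebTicketCache:
    """Fetch a webticket once, then hand the same ticket to every
    service that asks for one (Autodiscover, then CertProv)."""

    def __init__(self, fetch):
        self._fetch = fetch   # callable performing the real webticket request
        self._ticket = None

    def get(self):
        # Only the first call triggers the (expensive) token + webticket
        # exchange; later challengers receive the cached ticket.
        if self._ticket is None:
            self._ticket = self._fetch()
        return self._ticket
```

A real client would also have to respect the ticket's lifetime, which this sketch deliberately omits.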

The Client learns about this by first sending a Mex request to the Cert provisioning URL. You can see that in the Trace below

clip_image012[3]

The client then submits the web ticket that it had received to the Cert provisioning URL it received above. After this it receives a 200 OK, and sign-in is now complete

clip_image013[3]

Sign in is now complete!!

__________________________________________________________________________________________________________________________________________

 

More Information:

Detailed log snippets showing the client requesting and receiving Token from Org ID and ADFS.

We discussed above that in order to obtain a webticket the client has to get authenticated with O365 AD. We described this process as below

  1. The client first tries to reach Org ID (O365 AD) to request a Token
  2. Here it learns that the tenant is enabled for ADFS so it has to now go to the ADFS URL first and ask for a token from ADFS
  3. The client reaches out to ADFS and requests a Token
  4. ADFS will challenge for authentication which will cause a Password prompt to appear, or the user credential stored in the credential manager will be passed to ADFS in which case the user will not be prompted for password
  5. Once the password is provided ADFS will provide the client a Token
  6. The client will then submit this token back to Org ID (O365 AD)
  7. Org ID will in turn provide the client the Org ID token

 

We can see this in the IDCRL logs, which are located on the client PC in the MSOTrace folder on the C: drive.

Below are snippets from IDCRL logs showing all the above steps in detail for reference.

 

Below is the Client finding out that it needs to go to ADFS URL to authenticate

 

<?xml version="1.0" encoding="UTF-8"?><RealmInfo Success="true">

<Login>nj@johnsondatasystems.com</Login>

<NameSpaceType>Federated</NameSpaceType>

<DomainName>JOHNSONDATASYSTEMS.COM</DomainName>

<FederationGlobalVersion>-1</FederationGlobalVersion><AuthURL>https://sts.johnsondatasystems.com/adfs/ls/</AuthURL>

<IsFederatedNS>true</IsFederatedNS>

<STSAuthURL>https://sts.johnsondatasystems.com/adfs/services/trust/2005/usernamemixed</STSAuthURL>

<FederationTier>0</FederationTier><FederationBrandName>JOHNSONDATASYSTEMS.COM</FederationBrandName>

<AllowFedUsersWLIDSignIn>false</AllowFedUsersWLIDSignIn>

<MEXURL>https://sts.johnsondatasystems.com/adfs/services/trust/mex</MEXURL>

<SAML_AuthURL></SAML_AuthURL><PreferredProtocol>1</PreferredProtocol>
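The RealmInfo response above is ordinary XML, so the decision the client makes here ("is this namespace federated, and if so, which STS do I send my token request to?") can be reproduced with a few lines of parsing. A sketch using Python's standard library; the sample document is trimmed to the two fields that drive the decision:

```python
import xml.etree.ElementTree as ET

def federation_info(realm_xml: str):
    """Return (is_federated, sts_url) from a RealmInfo-style response."""
    root = ET.fromstring(realm_xml)
    ns_type = root.findtext("NameSpaceType")   # "Federated" or "Managed"
    sts_url = root.findtext("STSAuthURL")      # ADFS usernamemixed endpoint
    return ns_type == "Federated", sts_url

sample = """<RealmInfo Success="true">
  <NameSpaceType>Federated</NameSpaceType>
  <STSAuthURL>https://sts.johnsondatasystems.com/adfs/services/trust/2005/usernamemixed</STSAuthURL>
</RealmInfo>"""

federated, sts = federation_info(sample)
```

When `federated` is true the client goes to the STSAuthURL for a token first, exactly as described in the steps above.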

 

The Client then sends a request to the ADFS URL to get a Token

<s:Header>

<wsa:Action s:mustUnderstand="1">http://schemas.xmlsoap.org/ws/2005/02/trust/RST/Issue</wsa:Action>

<wsa:To s:mustUnderstand="1">https://sts.johnsondatasystems.com:443/adfs/services/trust/2005/usernamemixed</wsa:To>

<wsa:MessageID>1507666614</wsa:MessageID>

<wsse:Security><wsse:UsernameToken wsu:Id="user">

<wsse:Username>nj@johnsondatasystems.com</wsse:Username>

<wsse:Password>*********</wsse:Password>

</wsse:UsernameToken><wsu:Timestamp Id="Timestamp"><wsu:Created>2017-10-10T20:16:53Z</wsu:Created><wsu:Expires>2017-10-10T20:21:53Z</wsu:Expires>

</wsu:Timestamp></wsse:Security>

</s:Header>

<wst:RequestSecurityToken Id="RST0">

<wst:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</wst:RequestType>

<wsp:AppliesTo><wsa:EndpointReference><wsa:Address>urn:federation:MicrosoftOnline</wsa:Address></wsa:EndpointReference></wsp:AppliesTo>

<wst:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</wst:KeyType></wst:RequestSecurityToken></s:Body></s:Envelope>
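The request body above follows the WS-Trust 2005 RequestSecurityToken shape, so programmatically it is little more than a filled-in template. A minimal sketch that keeps only the fields shown above (the security header, credentials, and timestamps are deliberately omitted):

```python
# Minimal WS-Trust 2005 RST body; a real request also carries the
# wsse:Security header with the username token and timestamps.
RST_TEMPLATE = """<wst:RequestSecurityToken Id="RST0"
    xmlns:wst="http://schemas.xmlsoap.org/ws/2005/02/trust"
    xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
    xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <wst:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</wst:RequestType>
  <wsp:AppliesTo>
    <wsa:EndpointReference><wsa:Address>{applies_to}</wsa:Address></wsa:EndpointReference>
  </wsp:AppliesTo>
  <wst:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</wst:KeyType>
</wst:RequestSecurityToken>"""

def build_rst(applies_to: str = "urn:federation:MicrosoftOnline") -> str:
    """Fill the AppliesTo address, the only field that varies here."""
    return RST_TEMPLATE.format(applies_to=applies_to)
```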

It will then receive the Token from ADFS

<s:Body>

<t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">

<t:Lifetime>

<wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2017-10-10T20:16:55.065Z</wsu:Created>

<wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2017-10-10T21:16:55.065Z</wsu:Expires>

</t:Lifetime>

<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">

<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing"><wsa:Address>urn:federation:MicrosoftOnline</wsa:Address></wsa:EndpointReference>

</wsp:AppliesTo>

<t:RequestedSecurityToken>**********</t:RequestedSecurityToken>

 

Once the Client receives the Token from ADFS it then submits this token to O365 AD (Org ID) and requests a token from Org ID

Below is the request that the client sends to Org ID asking for a token

<s:Header>

<wsa:Action s:mustUnderstand="1">http://schemas.xmlsoap.org/ws/2005/02/trust/RST/Issue</wsa:Action>

<wsa:To s:mustUnderstand="1">https://login.microsoftonline.com:443/rst2.srf</wsa:To>

<wsa:MessageID>1507666615</wsa:MessageID>

<ps:AuthInfo xmlns:ps="http://schemas.microsoft.com/Passport/SoapServices/PPCRL" Id="PPAuthInfo">

<ps:HostingApp>{0000003F-002B-0000-4C0A-AE614E000000}</ps:HostingApp>

<ps:BinaryVersion>7</ps:BinaryVersion>

<ps:UIVersion>1</ps:UIVersion>

<ps:Cookies></ps:Cookies>

<ps:RequestParams>AQAAAAIAAABsYwQAAAAxMDMz</ps:RequestParams>

</ps:AuthInfo><wsse:Security>*********</wsse:Security>

</s:Header>

<s:Body>

<ps:RequestMultipleSecurityTokens xmlns:ps="http://schemas.microsoft.com/Passport/SoapServices/PPCRL" Id="RSTS">

<wst:RequestSecurityToken Id="RST0">

<wst:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</wst:RequestType>

<wsp:AppliesTo>

<wsa:EndpointReference><wsa:Address>http://Passport.NET/tb</wsa:Address></wsa:EndpointReference>

</wsp:AppliesTo>

</wst:RequestSecurityToken><wst:RequestSecurityToken Id="RST1">

<wst:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</wst:RequestType>

<wsp:AppliesTo>

<wsa:EndpointReference><wsa:Address>https://webpooldm12a05.infra.lync.com/WebTicket/WebTicketAdvancedService.svc/WsFed_bearer</wsa:Address></wsa:EndpointReference></wsp:AppliesTo>

</wst:RequestSecurityToken></ps:RequestMultipleSecurityTokens>

</s:Body>

 

Org ID then issues a Token to the client which we can see below

<wst:RequestedProofToken><wst:BinarySecret>x5hTCJZbYuSHUl0mKtoAab/zN4DE+YLW</wst:BinarySecret></wst:RequestedProofToken>

</wst:RequestSecurityTokenResponse><wst:RequestSecurityTokenResponse>

<wst:TokenType>urn:oasis:names:tc:SAML:1.0</wst:TokenType>

<wsp:AppliesTo xmlns:wsa="http://www.w3.org/2005/08/addressing">

<wsa:EndpointReference><wsa:Address>https://webpooldm12a05.infra.lync.com/WebTicket/WebTicketAdvancedService.svc/WsFed_bearer</wsa:Address></wsa:EndpointReference>

</wsp:AppliesTo>

<wst:Lifetime><wsu:Created>2017-10-10T20:16:56Z</wsu:Created><wsu:Expires>2017-10-11T04:16:56Z</wsu:Expires></wst:Lifetime>

<wst:RequestedSecurityToken>**********</wst:RequestedSecurityToken>


SFB online Client Sign in and Authentication Deep Dive - Part 4


Scenario: Pure Online (O365) environment, SFB user is homed Online, ADFS is Configured, MA (Modern Auth) is Enabled in O365

NOTE:

I have tried my best to ensure the information below is accurate. Some of the terms I use to describe things like Modern Auth provider, O365 AD, Org ID etc. may not be standard terminology; I use them solely to make the explanation simpler. My intention here is to explain what happens in the background when a SFB client signs in, so that it helps engineers and customers troubleshoot issues related to sign-in and authentication.

 

How Does it Work?

Below is a high-level explanation of how the SFB Online client sign-in process works

 

SIP URI of the user - NJ@JohnsonDataSystems.com

  1. SFB client Queries DNS for Lyncdiscover.domain.com. This should point to Webdir.online.lync.com
  2. SFB Client then sends an unauthenticated GET request to Lyncdiscover.domain.com
  3. The Client is then redirected to Autodiscover https://webdir2a.online.lync.com/Autodiscover/AutodiscoverService.svc/root/user?originalDomain=johnsondatasystems.com
  4. SFB Client then sends a Request to Autodiscover to discover its pool for sign in.
  5. The Client is then challenged and is provided the URL for Webticket service (https://webdir2a.online.lync.com/WebTicket/WebTicketService.svc) where it can request a Webticket
  6. The Client then sends a POST request to Webticket Service
  7. Webticket Service Redirects the Client to Modern Auth Provider (login.windows.net)
  8. Now in order to authenticate, the client reaches out to Login.windows.net and requests a token; the intention here is to get a token from login.windows.net
  9. From this point onwards we will see that login.windows.net will redirect the client to login.microsoftonline.com
  10. Since the tenant is enabled for ADFS, the client is then redirected to the on-premises ADFS server https://sts.cloudsfb.com
  11. SFB client will then send a request to ADFS server and request a token
  12. The Client may receive a Password prompt (or previously saved password from credential manager is passed) and once the correct password is provided, ADFS will issue a Token to the client
  13. The Client then submits this token to login.microsoftonline.com which in turn passes the client to Login.windows.net
  14. Login.windows.net will now issue the Modern Auth Token to the client
  15. The Client then submits this token that it received from Login.windows.net to Webticket Service
  16. Webticket service now will grant a Webticket to the Client
  17. The client then submits this webticket to Autodiscover
  18. In response, Autodiscover will provide the pool name (sipfed2a.online.lync.com, port 443) where the client can send a REGISTER to sign in
  19. The SFB client now sends a SIP REGISTER to the Online Edge pool (sipfed2a.online.lync.com, port 443)
  20. It is then challenged for authentication again. Here the ONLY supported method of authentication is TLS-DSK; the client is provided a Cert provisioning URL (https://webdir2a.online.lync.com:443/CertProv/CertProvisioningService.svc) in the 401 Unauthorized response
  21. The SFB client then sends a request to Certprov
  22. Here again the Client is challenged for authentication and is redirected to webticket service to get Webticket
  23. The Client had already Obtained a webticket in step 16 above
  24. The client will submit the same webticket obtained in step 16 to the Cert provisioning service
  25. The Client then receives a certificate
  26. The SFB client can now send a Register again and use the certificate it downloaded for authentication

Below is a graphical representation of the SFB online Client Sign in process

clip_image001

Detailed Explanation of SFB online Client Sign in process with LOG Snippets:

SIP URI of the user - NJ@JohnsonDataSystems.com

When a SFB client wants to sign in, it needs to know where to send its sign-in request. When a user enters their SIP URI, the SFB client extracts the domain name from the SIP URI, forms an autodiscover URL from it, and sends an unauthenticated GET request to lyncdiscover.domain.com to start the discovery process. The response code for this request will be '200 OK', and in the response we should receive the external web services URL for autodiscover.

You can see the request and Response below

clip_image002

The SFB Client learns that it needs to Contact https://webdir2a.online.lync.com/

It then does a TCP handshake with webdir2a.online.lync.com, followed by a TLS handshake. (I haven't included the TCP and TLS handshake screenshots here; you can see them if you collect a network trace while signing in.)

The client then sends a request to the Autodiscover URL for its own domain (in my case @JohnsonDataSystems.com), and in response it receives the Autodiscover URLs specific to the user's tenant. You can see the request and response below

clip_image003

The client then sends a request to the user URL. We are trying to discover a specific user's home pool here, hence the request goes to the "User" URL.

In the response, the Client receives a Web ticket URL, which provides the location of the WebTicketService.

You can see the request and Response below

clip_image004

The Client then needs to send a Request to the Web ticket service URL in order to obtain a Web ticket. The client will send this request in a POST message to the web ticket Service. Now since Modern Authentication is enabled on the Tenant, in order to grant the webticket the client will first need to get a Token from the Modern Auth provider so the client is redirected to the Modern Auth provider URL - <af:OAuth af:authorizationUri="https://login.windows.net/common/oauth2/authorize" xmlns:af="urn:component:Microsoft.Rtc.WebAuthentication.2010" />

clip_image005
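The redirect target advertised in the af:OAuth element above is a standard OAuth2 authorization endpoint, so the request the client issues next can be sketched as plain URL construction. In this sketch the client_id, redirect_uri, and resource values are placeholders of mine, not the real registration the SFB client uses:

```python
from urllib.parse import urlencode

AUTHORIZE = "https://login.windows.net/common/oauth2/authorize"

def authorize_url(client_id: str, redirect_uri: str, resource: str) -> str:
    # Standard OAuth2 authorization-code request against the endpoint
    # advertised in the af:OAuth element; parameter values below are
    # placeholders, not the real SFB client registration.
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "resource": resource,
    }
    return AUTHORIZE + "?" + urlencode(params)

url = authorize_url("00000000-0000-0000-0000-000000000000",
                    "urn:ietf:wg:oauth:2.0:oob",
                    "https://webdir.online.lync.com")
```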

The client then sends a request to the MA/OAuth URL to request a token; the intention here is to get a token from login.windows.net

From this point onwards we will see that login.windows.net will redirect the client to - login.microsoftonline.com.

Below is the Request that client sends to the MA/OAUTH URL and in response it is redirected to AD - login.microsoftonline.com

clip_image006

We have to remember that the intention here is to get a token from login.windows.net; along the way we will see several exchanges between the client and login.microsoftonline.com. Below are screenshots showing these exchanges.

clip_image007

Now, since the customer has ADFS, the Modern Auth provider will redirect the client to the ADFS server. Below is the screenshot showing login.microsoftonline.com redirecting the client to ADFS

clip_image008

The client will then reach out to ADFS to get an ADFS token. (This is where the user might get prompted to enter credentials; if the credentials are already stored in credential manager they will be passed in the background and the user may not see the prompt.) The next two screenshots show that

clip_image009

clip_image010

The client will then submit this token to login.microsoftonline.com, where it will be redirected again to https://login.windows.net, which will finally provide the client with the Modern Auth token. This is shown in the two screenshots below

clip_image011

clip_image012

Now the client will submit this token to the webticket URL, and the Webticket service will issue the webticket, shown below

clip_image013

The client will then submit this webticket to Autodiscover and in return it will receive the POOL names where it has to send the Register to Sign in.

clip_image014

Once the client receives the pool names it will then send a SIP REGISTER message to the SFB pool in order to sign in. You can see that in the client UCCAPI log file. This is shown in the snippet below

clip_image015

In response the Client will now receive a 401 Unauthorized message again and the server will again ask the client to authenticate itself. Here the ONLY method of authentication that is available is TLS-DSK (Cert based authentication)

The SFB online server will provide the Client a Cert provisioning URL in the 401 you can see that in the snippet below

clip_image016

This means that the Client now needs to present a Certificate that can then be used to authenticate the client. Since this is the first time the client is signing in it will NOT have the certificate installed. This certificate is ideally downloaded after the client signs in for the first time and is valid for about 8 hours.

Since the client does not have a valid certificate it now has to Re-Authenticate to the Cert provisioning service.

The process for this will again be the same: the client will send a request to the Cert Provisioning URL, where it will be challenged to get a webticket. The client has to first get a web ticket from the webticket service URL, and to get a web ticket it needs a token from the Modern Auth provider. But the client has already done these steps earlier, so it already has a web ticket from the web services URL. The client needs to submit this same web ticket to the Cert provisioning service, and once it submits the web ticket it serves as proof of authentication.

The Client learns about this by first sending a Mex request to the Cert provisioning URL. You can see that in the Trace below

clip_image017

The client then submits the web ticket that it had received previously to the Cert provisioning URL it received above. After this it receives a 200 OK containing the certificate, and sign-in is now complete

clip_image018

Office 365 Weekly Digest | April 1 – 7, 2018


Welcome to the April 1 - 7, 2018 edition of the Office 365 Weekly Digest.

Continuing the previous week's trend, five updates for Outlook were added to the Office 365 Roadmap.

While there are no brand new events in this digest, most of the online customer immersion experiences are now booking sessions for May. There's also still time to register for the Azure Active Directory webinars.

The Blog Roundup spotlight continues to shine on Microsoft Teams with several updates announced last week. There was also a major announcement regarding the future of Office 365 URL and IP address updates, including a preview of the new web service. Additional posts include new features for Stream, changes to the SharePoint Online social feed coming in June 2018 and the second part of the Exchange TLS 1.2 guidance series.

Wrapping up the noteworthy items are updates to the Office 365 URLs and IP address ranges, a look at the new Microsoft 365 Security and Compliance Center, FedRAMP compliance for Outlook for Android and iOS, as well as upcoming improvements to the Azure Active Directory sign-in experience.

 

OFFICE 365 ROADMAP

 

Below are the items added to the Office 365 Roadmap last week:

 

Feature 20974 - Outlook for Windows: Get suggestions for Calendar meeting and appointment locations
Start typing a meeting room or venue for your appointment or meeting. Outlook will look for matching locations.
Status: In development | Added: 04/02/2018 | Estimated Release: Q3 CY2018 | More Info: n/a

Feature 27326 - Groups in Outlook: Members of newly created groups will receive all group email in their inbox by default
We received feedback from customers that users were sometimes missing emails sent to groups they were members of. Now, when new Office 365 groups are created from Outlook, the default subscription option for group members is to receive all group email in the Inbox; previously it was set to not subscribed. This feature is available in Outlook on the web and is rolling out across Outlook. Group owners and members can change this setting at any time.
Status: Rolling out | Added: 04/03/2018 | Estimated Release: April CY2018 | More Info: n/a

Feature 24636 - Large Address Aware support in Outlook 2016 may affect some COM add-ins
Build 16.0.8528.2147 (Version 1709) of the 32-bit version of Outlook 2016 for Windows has been updated to be Large Address Aware (LAA). This increases the maximum address space available to Outlook from 2 GB to 4 GB when it is running on a 64-bit version of Windows, which is key to improved graphics rendering in Outlook on newer displays that support higher screen resolutions. While LAA Outlook has been extensively tested, some third-party or in-house developed Outlook COM add-ins may experience issues with the change. Only 32-bit, 1709 or later versions of Outlook 2016 running on a 64-bit version of Windows may be impacted. For details refer to the article linked under "More Info".
Status: Rolling out | Added: 04/05/2018 | Estimated Release: Q4 2017 | More Info: Large Address Aware in Outlook 2016

Feature 25007 - Outlook for Windows: Simplified Office 365 and Outlook.com Shared and Delegate Calendar options
We've simplified the process for accepting invitations and assigning delegates for shared calendars from Outlook for Windows. Now, calendars shared from Outlook for Windows will also be available on Outlook for iOS and Android.
Status: In development | Added: 04/05/2018 | Estimated Release: Q1 CY2018 | More Info: n/a

Feature 26978 - Outlook for iOS: Block links to external images
Set up your account to block links in emails to external images.
Status: In development | Added: 04/07/2018 | Estimated Release: Q4 CY2018 | More Info: n/a

 

 

UPCOMING EVENTS

 

Azure Active Directory Webinars for April

When: Multiple sessions currently scheduled from April 3 - 25, 2018 | Are you looking to deploy Azure Active Directory quickly and easily? We are offering free webinars on key Azure Active Directory deployment topics to help you get up and running. Sessions include Getting Ready for Azure AD, Managing Partner and Vendor Access Using Azure B2B Collaboration, Introduction to Azure AD B2C, Choosing the Right Authentication Method for Azure AD, and more. Each 1-hour webinar is designed to support IT Pros in quickly rolling out Azure Active Directory features to their organization. All webinars are free of cost and will include an anonymous Q&A session with our Engineering Team. So, come with your questions! Capacity is limited. Sign up for one or all of the sessions today!  Note: There are also some sessions available on-demand.

 

Productivity Hacks to Save Time & Simplify Workflows

When: Wednesday, April 11, 2018 and Wednesday, April 18, 2018 at 1pm ET | This 90-minute hands-on experience will give you the opportunity to test drive Windows 10, Office 365 and Dynamics 365. A trained facilitator will guide you as you apply these tools to your own business scenarios and see how they work for you. During this interactive session, you will: (1) Discover how you can keep your information more secure without inhibiting your workflow, (2) Learn how to visualize and analyze complex data, quickly zeroing in on the insights you need, (3) See how multiple team members can access, edit and review documents simultaneously, and (4) Gain skills that will save you time and simplify your workflow immediately. Each session is limited to 12 participants, reserve your seat now.

 

Visualizing, Analyzing & Sharing Your Data Without Having to be a BI Expert

When: Tuesday, May 1, 2018 at 12pm ET and Tuesday, May 8, 2018 at 1pm ET | This 2-hour hands-on experience will give you the opportunity to test drive the latest business analytics tools. A trained facilitator will guide you as you apply these tools to your own business scenarios and see how they can work throughout your organization. During this interactive session, you will explore how to: (1) Locate and organize large amounts of data from multiple sources, (2) Visualize complex data and identify trends quickly without having to be a BI expert, (3) Find and collaborate with company experts on the fly, even if they work in another part of the country, and (4) Gather colleagues' opinions easily and eliminate communication and process bottlenecks. Each session is limited to 12 participants, reserve your seat now.

 

Transforming your business to meet the changing market and needs of your customers

When: Thursday, May 3, 2018 at 12pm and 3pm ET | This 2-hour hands-on experience will give you the opportunity to test drive Windows 10, Office 365 and Dynamics 365. A trained facilitator will guide you as you apply these tools to your own business scenarios and see how they work for you. During this interactive session, you will: (1) Use digital intelligence to build personalized experiences across all customer touchpoints, (2) Improve customer service through a single, unified experience that delivers end-to-end service across every channel, (3) Increase customer satisfaction with intelligent scheduling, native mobile support, and remote asset monitoring to help you get the job done right the first time, and (4) Run your project-based business more productively by bringing people, processes, and automation technology together through a unified experience. Each session is limited to 12 participants, reserve your seat now.

 

Hands-on with security in a cloud-first, mobile-first world

When: Thursday, May 10, 2018 at 12pm and 3pm ET | This 2-hour hands-on session will give you the opportunity to try Microsoft technology that secures your digital transformation with a comprehensive platform, unique intelligence, and partnerships. A trained facilitator will guide you as you apply these tools to your own business scenarios and see how they work for you. During this interactive session, you will: (1) Detect and protect against external threats by monitoring, reporting and analyzing activity to react promptly to provide organization security, (2) Protect your information and reduce the risk of data loss, (3) Provide peace of mind with controls and visibility for industry-verified conformity with global standards in compliance, (4) Protect your users and their accounts, and (5) Support your organization with enhanced privacy and compliance to meet the General Data Protection Regulation. Each session is limited to 12 participants, reserve your seat now.

 

Connecting, Organizing & Collaborating with Your Team

When: Tuesday, May 15, 2018 at 12pm ET | During this session, you will have the opportunity to experience Windows 10, Office 365 and Microsoft's newest collaboration tool: Microsoft Teams. A trained facilitator will guide you as you apply these tools to your own business scenarios and see how they work for you. During this interactive session, you will explore how to use Microsoft Teams and Office 365 to: (1) Create a hub for team work that works together with your other Office 365 apps, (2) Build customized options for each team, (3) Keep everyone on your team engaged, (4) Coauthor and share content quickly, and (5) Gain skills that will save you time and simplify your workflow immediately. Each session is limited to 12 participants, reserve your seat now.

 

BLOG ROUNDUP

 

What's New in Microsoft Teams - April 2018 Update

Based on your feedback, we continue to add new capabilities on a regular basis to make Microsoft Teams an even more powerful hub for teamwork. This post provides a summary of the main updates that have rolled out or have started to roll out in the last few weeks. These updates include: (1) Skype for Business Online contacts imported to Teams, (2) Unified Presence between Teams and Skype for Business, (3) Out of office status in Teams, (4) Skype for Business Interop with Persistent Chat, (5) Improved call notifications, (6) Increase maximum to 200 channels per team, (7) Installing the Teams client using MSI, (8) a new Teams and Skype for Business Admin Center, and many more.

Related:

 

Announcing: Office 365 endpoint categories and Office 365 IP Address and URL web service

Microsoft recently published a set of connectivity principles for Office 365 which provides concise guidance on the recommended ways of achieving optimal performance and connectivity to Office 365. The first of these principles is to identify and differentiate Office 365 network traffic using Microsoft-published endpoints. Endpoints include the IP addresses and URLs that are used to connect to Office 365. We have released a preview of a new web service that publishes these endpoints, making it easier for enterprise customers to evaluate, configure, and stay up to date with changes in Office 365 network endpoints. These web services will eventually replace the HTML, XML, and RSS data published today. Usage documentation for the IP Address and URL web service is detailed in Managing Office 365 Endpoints – Web Service. We are also publishing three categories for Office 365 network endpoints as attributes of this data: (1) Optimize, for a small number of endpoints that require low-latency, unimpeded connectivity and should bypass proxy servers, network SSL break-and-inspect devices, and network hairpins, (2) Allow, for a larger number of endpoints that benefit from low-latency, unimpeded connectivity. Although not expected to cause failures, we also recommend bypassing proxy servers, network SSL break-and-inspect devices, and network hairpins; good connectivity to these endpoints is required for Office 365 to operate normally, and (3) Default, for other Office 365 endpoints, which can be directed to the default internet egress location for the company WAN. Use of these categories, how they simplify connectivity to Office 365, and what actions you can take to make use of them is detailed in Office 365 Network Connectivity Principles. Note: During the preview we'll work to ensure that the data is accurate, but we will not be providing support outside of business hours. We will be seeking feedback from network device vendors and enterprise customers.
The web services are in preview now, and we encourage you to start migrating any scripts that you have for working with this data today. While in preview, you should not rely on the data from the web services in production. Also, while in preview only the Office 365 worldwide commercial instance is annotated with endpoint categories. Endpoints for other service instances such as US Gov GCC High and others are temporarily all set to Allow. We plan to make these supported and with GA status in the coming months.
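As a quick illustration of consuming the new data, the sketch below filters endpoint records by the published category attribute. The record shape (`category`, `urls`, `ips`) follows the web service's documented JSON output; the hard-coded sample records here are illustrative only, and the live worldwide data would instead be fetched from the endpoints.office.com web service with a client request GUID, as described in the usage documentation.

```python
# Sketch: filtering Office 365 endpoint records by category.
# Live data is published by the IP Address and URL web service, e.g.
#   https://endpoints.office.com/endpoints/worldwide?clientrequestid=<GUID>
# Here we parse a small hard-coded sample in the same JSON shape so the
# example is self-contained.
import json

SAMPLE = json.loads("""
[
  {"id": 1, "serviceArea": "Exchange", "category": "Optimize",
   "urls": ["outlook.office365.com"], "ips": ["40.96.0.0/13"], "required": true},
  {"id": 2, "serviceArea": "SharePoint", "category": "Allow",
   "urls": ["*.sharepoint.com"], "required": true},
  {"id": 3, "serviceArea": "Common", "category": "Default",
   "urls": ["portal.office.com"], "required": true}
]
""")

def endpoints_in_category(records, category):
    """Return the URL and IP entries for a given endpoint category."""
    urls, ips = [], []
    for rec in records:
        if rec.get("category") == category:
            urls.extend(rec.get("urls", []))
            ips.extend(rec.get("ips", []))
    return urls, ips

# The "Optimize" set is the short list that should bypass proxy servers
# and SSL break-and-inspect devices.
opt_urls, opt_ips = endpoints_in_category(SAMPLE, "Optimize")
print(opt_urls, opt_ips)
```

A network team could use the same filter to generate proxy bypass lists for the Optimize and Allow categories while routing Default traffic through the normal egress.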

Related:

 

New Stream Features - April 2018

We continue to build out some amazing capabilities in Microsoft Stream. In this post you'll find a quick summary of some notable updates already available or coming very soon. These updates include Tier-C compliance, Spotlight videos on Home page, Cloud recording for meetings in Microsoft Teams, Edit transcripts and people timelines, and more. | Related: Microsoft Stream announces Tier C compliance and new capabilities

 

In June 2018, we're making changes to the native social capabilities in SharePoint Online

In June 2018, we're making changes to the native social capabilities in SharePoint Online. Office 365 includes two options for enterprise social features: Yammer and the SharePoint Newsfeed. The native SharePoint social features in SharePoint Online were designed to let people work together in ways that are most effective for them, by providing great collaboration tools that anyone can use to share ideas, find people and expertise, and locate business information. Over the course of the past 18 months we've introduced new capabilities designed to take advantage of the latest innovations across intelligence, mobile and more to deliver solutions that allow people to communicate more effectively, from Office 365 Groups to Team News, Communication Sites, and Yammer. With these new innovations deployed globally, we'll be making changes to the native social features in SharePoint Online. In June 2018 we'll make the company feed read-only in SharePoint Online and remove the option to implement the Newsfeed feature in navigation and through Tenant Administration. The company feed is an organization's public newsfeed. All posts appear to the company, including those created by people that users might not be following.

 

Exchange Server TLS guidance Part 2: Enabling TLS 1.2 and Identifying Clients Not Using It

In part 2 of our Exchange Server TLS guidance series we focus on enabling TLS 1.2 and confirming it can be used by your Exchange servers for incoming and outgoing connections, as well as identifying any incoming connection that is not utilizing TLS 1.2. The ability to identify these incoming connections will vary by Windows Server OS version and other factors. Part 2 does not cover disabling TLS 1.0 or TLS 1.1, nor disabling older cipher suites; Part 3 of the series will go into detail on those topics. For part 2 we assume you have already audited your on-premises Exchange servers and applied all updates called out in Part 1: Getting Ready for TLS 1.2. If you have not, please perform the activities called out in Part 1 before moving forward with any configurations outlined in part 2.
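For orientation, TLS 1.2 on Windows Server is enabled through SCHANNEL registry values; the fragment below shows the commonly documented shape of those keys. Treat it as a sketch and verify the exact keys (and the related .NET `SchUseStrongCrypto` guidance) against Part 2 of the series before applying to production servers; a reboot is required for SCHANNEL changes to take effect.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000
```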

 

NOTEWORTHY

 

Updated: Office 365 URLs and IP Address Ranges

April 2, 2018: Updates for Microsoft Teams, Office Online, Skype for Business Online and Sway effective May 1, 2018. Details on the updates are available on the RSS feed and the complete list is located here. | Additional Resources: Managing Office 365 Endpoints, Content Delivery Networks and Client Connectivity.

 

Introducing the Microsoft 365 Security and Compliance Center

As part of the Microsoft 365 vision and expanding on the unified administration experience we started with the Microsoft 365 admin center, we have created the Microsoft 365 security and compliance center. The Microsoft 365 security and compliance center maintains the centralized experience, intelligence, and customization that Office 365 security and compliance center offers today. In addition, it also enables data administrators, compliance officers, security administrators, and security operations to discover security and compliance controls across Office 365, Enterprise Mobility + Security, and Windows in a single place. Over the coming months, we will continue integrating and streamlining administration experiences across Microsoft 365. To help organizations optimize their resources we will add new capabilities to help deploy and manage security and compliance solutions. We will also continue to improve the efficiency of the security and compliance administrator's user experience, so they can complete their tasks quickly to get more done with their day. The Microsoft 365 security and compliance center is rolling out now. Once deployed, administrators can login as they usually do, or navigate to https://protection.microsoft.com to try out the new security and compliance experiences. In addition, they can also navigate to the Microsoft 365 security and compliance center from the Microsoft 365 admin center. Administrators will still be able to configure and manage their Office 365 security and compliance settings within the new Microsoft 365 security and compliance center.

 

A new architecture for Exchange hybrid customers enables Outlook mobile and security

We're announcing a new architecture for Exchange Server and Office 365 hybrid customers that unlocks Enterprise Mobility and Security (EMS) capabilities for Outlook for iOS and Android. With Hybrid Modern Authentication, Exchange customers can combine the power of Outlook with Azure Conditional Access and Intune App Protection Policies to securely manage corporate messaging on mobile devices. Once Exchange customers with servers on-premises establish a hybrid configuration with the Microsoft Cloud and enable Hybrid Modern Authentication with Office 365, Outlook for iOS and Android authenticates against Azure Active Directory and synchronizes the mailbox data in Exchange Online – the Outlook mobile client never connects with the on-premises Exchange environment – unlocking the power of Office 365, Outlook for iOS and Android and Enterprise Mobility + Security (EMS). Architected in the Microsoft Cloud, Outlook for iOS and Android is fully integrated with Azure Active Directory and Microsoft Intune. This means that organizations can enforce conditional access as well as application and device management policies while experiencing the richness of Outlook for iOS and Android. Now Exchange Server customers with hybrid modern authentication can use the cloud-backed capabilities of Outlook such as Focused Inbox, intelligent Search and enhanced time management to achieve more on their mobile device.

 

The Outlook for iOS and Android architecture is FedRAMP compliant for US Office 365 GCC customers

Microsoft is pleased to announce that the updated architecture of Outlook for iOS and Android meets all the FedRAMP requirements, which are FISMA compliant and based on NIST 800-53 rev3. The security requirements have been approved by a third-party assessment organization and have passed the independent audit assessment, which gives Office 365 Government Community Cloud (GCC) customers the ability to adopt Outlook for iOS and Android. This update brings the necessary components of the Outlook for iOS and Android architecture into the accreditation boundary of Office 365 and enables GCC customers to accelerate the adoption of this secure mobile email and calendar solution. With controls in place to ensure mailbox data transfer and storage security practices are consistent with the most advanced cloud solutions, US Office 365 GCC customers can also confidently use mobile application management tools such as Microsoft Intune to manage device access and mailbox policies for their mobile users.

 

Upcoming improvements to the Azure AD sign-in experience

We wanted to give you an early heads-up on some visual design updates that are coming to the Azure AD sign-in experience. Customers gave us a LOT of feedback the last time we updated the sign-in experience; it was clear that you wanted more notice, earlier in the process, with more information. We've learned, and this time we're giving you more time and information than ever before. Since we released the redesign of the sign-in screens a few months ago, we've gotten feedback on how we can further improve the new UI. Our next set of changes aims to reduce clutter and make our screens look cleaner. A visually simpler UI helps users focus on the task at hand – signing in. This is solely a visual UI change with no changes to functionality. Existing company branding settings will carry forward to the updated UI. The changes include an updated layout and styling of UI elements, as well as moving all screens to the new sign-in experience. We plan to roll these changes out gradually to give you time to prepare for them. A timeline of what to expect in the upcoming months is included below:

  • Early-May: We'll release a notification banner on the existing sign-in page, so everyone has the opportunity to learn about and prepare for the change.
  • Mid-May: A link to preview the updated UI will be available in the notification banner. This preview gives you the opportunity to capture screenshots of the new UI if you need to update user documentation. When in the preview, users will have a link to switch back to the existing experience.
  • Mid-June: The updated sign-in UI will be made generally available. All users will default to the updated UI.


Office 365: Licensing mail users results in mailbox objects.


In Office 365 we allow administrators to create mail user objects.  A mail user object is a security principal in the local Active Directory that also has an external email address assigned.  The user will appear in the global address list as a recipient, and when selected, messages are sent to the external email address assigned to the user.

 

In recent weeks I have worked with customers that have begun implementing automated license assignment or group-based licensing.  In the process of doing so, their mail user objects were included within the licensing scope applied.  When the Exchange Online license was applied, the mail user objects were converted to mailbox objects, causing interruptions in mail flow.

 

Here is an example of a mail user created on premises.

 

[PS] C:\>Get-MailUser TestAssigned

Name                                     RecipientType
----                                     -------------
Test Assigned                            MailUser

 

When Azure AD Connect has replicated the object it will be represented in Exchange Online as a mail user object.

 

PS C:\> Get-MailUser TestAssigned

Name                                     RecipientType
----                                     -------------
Test Assigned                            MailUser

 

When the mail user account is initially provisioned the account is not licensed.

 

PS C:\> Get-MsolUser -UserPrincipalName testassigned@domain.com

UserPrincipalName            DisplayName   isLicensed
-----------------            -----------   ----------
TestAssigned@domain.com      Test Assigned False

 

Using the Office 365 Portal an Exchange Online license can be assigned to the mail user account.

 

PS C:\> Get-MsolUser -UserPrincipalName testassigned@domain.com

UserPrincipalName            DisplayName   isLicensed
-----------------            -----------   ----------
TestAssigned@domain.com      Test Assigned True

 

Post license assigned the object is converted to a mailbox object within Exchange Online.

 

PS C:\> Get-Mailbox testassigned

Name                      Alias                ServerName       ProhibitSendQuota
----                      -----                ----------       -----------------
Test Assigned             TestAssigned         cy1pr0601mb1626  49.5 GB (53,150,220,288 bytes)

 

The external email address property of the mail user is not preserved – all email will now be delivered to the mailbox that was provisioned.

 

This is considered by design.  In Exchange Online, the only object that will not provision a mailbox when a license is assigned is a user representing an on-premises mailbox, denoted in Exchange Online by a user object that is replicated with an Exchange GUID.  (Note: there is one exception to this – information can be found here:  https://blogs.technet.microsoft.com/timmcmic/2017/09/10/office-365-users-have-both-a-cloud-and-on-premises-mailbox/)

 

To correct the condition, the license can be removed through the Office 365 Portal (or through any means that can remove the Exchange Online SKU).

 

PS C:\> Get-MsolUser -UserPrincipalName testassigned@domain.com

UserPrincipalName            DisplayName   isLicensed
-----------------            -----------   ----------
TestAssigned@domain.com      Test Assigned False

 

This will cause the object to convert back to a mail user object, making the external email address applicable again.

 

PS C:\> Get-MailUser testassigned

Name                                     RecipientType
----                                     -------------
Test Assigned                            MailUser

 

To preserve mail user functionality, Exchange Online plans should not be included when licenses are assigned to these recipient objects.
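The guard described above is simple to express: before assigning a license bundle, strip the Exchange Online service plan for any recipient that should remain a mail user. The sketch below is a hypothetical illustration over an exported recipient list; the record shape (`recipientType`) and the helper name `plans_to_assign` are assumptions for illustration, not a real Graph or MSOnline API, though `EXCHANGE_S_ENTERPRISE` is the style of service plan name Office 365 uses.

```python
# Hypothetical sketch: filter the Exchange Online service plan out of
# license assignments targeted at mail users, so they are not converted
# to mailboxes.  The record shape (recipientType) is assumed for
# illustration and is not a real API contract.
EXCHANGE_PLAN = "EXCHANGE_S_ENTERPRISE"

def plans_to_assign(recipient, requested_plans):
    """Drop the Exchange Online plan for mail users; pass others through."""
    if recipient.get("recipientType") == "MailUser":
        return [p for p in requested_plans if p != EXCHANGE_PLAN]
    return list(requested_plans)

# A mail user keeps every plan except Exchange Online; a mailbox keeps all.
print(plans_to_assign({"recipientType": "MailUser"}, [EXCHANGE_PLAN, "TEAMS1"]))
print(plans_to_assign({"recipientType": "UserMailbox"}, [EXCHANGE_PLAN, "TEAMS1"]))
```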

WaaS: April 10, 2018 is an important date!


Today is Monday, April 9, the eve of April 10!

Tomorrow is an important date from a Windows as a Service point of view: it is the last monthly update that will be available for Windows 10 1511 Enterprise and for Windows 10 1607 Home and Pro. Windows 10 1607 Enterprise still has 6 months of supplemental servicing, through October 2018.

For more information:

https://www.linkedin.com/pulse/waas-le-10-avril-2018-est-une-date-importante-alexandre-vinson/

As a reminder, end of support also means no more security updates.

 

Office 365: Internal forwarding and remote domains…


In Office 365 I often encourage customers to control user forwarding through the use of remote domains.  You can find two of my blog posts on this topic here:

 

https://blogs.technet.microsoft.com/timmcmic/2015/06/08/exchange-and-office-365-mail-forwarding-2/

https://blogs.technet.microsoft.com/timmcmic/2015/04/19/office-365-determine-accounts-that-have-forwarding-enabled/

 

In a default installation, a remote domain is defined in the service only for the * domain.

 

PS C:\> Get-RemoteDomain

Name                           DomainName                                   AllowedOOFType
----                           ----------                                   --------------
Default                        *                                            External

 

The AutoForwardEnabled property can be set to FALSE, which prevents auto forwarding from succeeding even if it is configured by the end user.

 

PS C:\> Get-RemoteDomain | Select-Object AutoForwardEnabled

AutoForwardEnabled
------------------
             False

 

I recently had a customer present with an issue where forwarding was not working.  In this particular instance, forwarding failed when a mailbox in the service created a rule, or used a forwarding SMTP address, targeting a user that had not yet been migrated.  The recipient's mailbox was located on premises in the same organization.  Let us take a look at an example…

 

In Office 365 I have changed the forwarding SMTP address to be a proxy address of an object that has not yet been migrated.  The forwarding SMTP address property is treated in the same manner as an inbox rule setting a forward or redirect address.  (For a detailed explanation of the different forwarding methods, see my previously mentioned posts.)

 

Set-Mailbox Contact -ForwardingSmtpAddress journal@contoso.com

 

When the forwarding address has been set, all emails directed to this mailbox should also be redirected to the forwarding address.

 

To test forwarding, I addressed an email from an external mailbox outside the organization to the mailbox where forwarding was enabled.  Using message tracing, I traced the message and noted that the status shows as FAILED.

 

PS C:\> Get-MessageTrace -RecipientAddress journal@contoso.com -SenderAddress timmcmic@microsoft.com

Received            Sender Address         Recipient Address       Subject Status
--------            --------------         -----------------       ------- ------
4/9/2018 2:41:08 PM timmcmic@microsoft.com journal@contoso.com     Test    Failed

 

Using Get-MessageTraceDetail we can review the full path of the message, including any drops.

 

PS C:\> $trace=Get-MessageTrace -RecipientAddress journal@contoso.com -SenderAddress timmcmic@microsoft.com | Get-MessageTraceDetail
PS C:\> $trace

Date                   Event                Detail
----                   -----                ------
4/9/2018 2:41:11 PM    Journal              Message was journaled. Journal report was sent to journal@contoso.co...
4/9/2018 2:41:11 PM    Drop                 Reason: [{LED=250 2.1.5 RESOLVER.MSGTYPE.AF; handled AutoForward address...
4/9/2018 2:41:11 PM    Drop                 Reason: [{LED=250 2.1.5 RESOLVER.MSGTYPE.AF; handled AutoForward address...
4/9/2018 2:41:11 PM    Spam Diagnostics

 

In this instance we can see two drop events have occurred.  We can review the specific details of one of the drop events.

 

PS C:\> $trace[1].detail
Reason: [{LED=250 2.1.5 RESOLVER.MSGTYPE.AF; handled AutoForward addressed to external recipient};{MSG=};{FQDN=};{IP=};{LRT=}]

 

This is interesting – the message trace details seem to indicate that the message was dropped to an external recipient.  Why?

 

In this case the message is leaving the Office 365 organization and is therefore considered external, even though it is destined for the on-premises organization.  Because transport sees the forwarding recipient as external, and the only defined remote domain (*) has auto forwarding disabled, the message is dropped.

 

How can we fix this?

 

We can fix this by defining a remote domain for the internal namespace.

 

PS C:\> New-RemoteDomain -DomainName contoso.com -Name "Contoso Remote Domain"

Name                           DomainName                                   AllowedOOFType
----                           ----------                                   --------------
Contoso Remote Domain          contoso.com                                  External

 

PS C:\> Get-RemoteDomain "Contoso Remote Domain" | fl autoForwardEnabled

AutoForwardEnabled : True

 

The entire process can be retested after allowing time for replication.  In this instance we will observe that the forwarded message is now delivered.

 

PS C:\> Get-MessageTrace -RecipientAddress journal@contoso.com -SenderAddress timmcmic@microsoft.com  | where{$_.status -like "Delivered"}

Received            Sender Address         Recipient Address       Subject Status
--------            --------------         -----------------       ------- ------
4/9/2018 3:06:30 PM timmcmic@microsoft.com journal@contoso.com     Test    Delivered

 

 

PS C:\> $trace=Get-MessageTrace -RecipientAddress journal@contoso.com -SenderAddress timmcmic@microsoft.com  | where{$_.status -like "Delivered"} | Get-MessageTraceDetail
PS C:\> $trace

Date                   Event                Detail
----                   -----                ------
4/9/2018 3:06:33 PM    Journal              Message was journaled. Journal report was sent to journal@contoso.co...
4/9/2018 3:06:33 PM    Journal              Message was journaled. Journal report was sent to journal@contoso.co...
4/9/2018 3:06:33 PM    Journal              Message was journaled. Journal report was sent to journal@contoso.co...
4/9/2018 3:06:34 PM    Send external        Message sent to mail.contoso.com at IPAddress using TLS1.2 w...
4/9/2018 3:06:33 PM    Spam Diagnostics

 

The new remote domain settings are being applied as expected.
