
Microsoft Office 365 x Superhub x 漢華專業服務集團



Office 365 helps you control operating costs!

 

漢華專業服務集團 is a full-service financial services organization, divided into three main areas: the first is assets, covering asset valuation and asset-related advisory services; the second is corporate services, providing a range of services and advisory work to businesses; and the third is financial services. In this interview, 漢華, which uses Superhub, shares its experience so that everyone can better understand Microsoft's services.

Dr. 葉國光, Chairman of 漢華, said that after adopting Office 365 things have become much more convenient, and communication with other offices is more efficient.

諸曉峰, Chief Operating Officer of 漢華, also pointed out that they had been running their previous server for some time and had found several problems. The first was storage capacity: because they managed the server themselves, storage was limited, and buying a new server or software required extra capital, which put some pressure on the company's business and operations. Office 365, by contrast, is billed as a monthly fee, which is a very flexible way to control costs and very favorable for the company's operations.

林志鋒, Systems Engineer at 漢華, said the company has about 70 mailboxes totaling close to 300 GB, and the whole migration took about three days. Superhub provided a great deal of support, with detailed reports and progress updates from the start of the migration through to its completion, which saved a lot of time.

高禎, Director of Product and Marketing at Superhub, noted that Superhub has eight years of experience in cloud services and is one of the industry leaders. Superhub has always maintained a close working relationship with Microsoft Hong Kong and currently has more than 140,000 contracted users. For every customer, they aim to provide professional, high-quality, and timely service and to help solve problems.

They provided an Office 365 solution for 漢華專業服務集團, tailored to its email and security requirements, and made clear commitments on IT support, so the company no longer has to be distracted by technology issues and can focus on its core operations.

Other similar case studies

Work in sync instantly, anytime, anywhere

Learn more >

Seamless email between Hong Kong and mainland China

Learn more >

24x7 technical support: professional, quality, timely service

Learn more >


Microsoft Office 365 x Superhub x 昌興



Office 365 enables seamless communication between Hong Kong and mainland China

 

昌興 has a hundred years of history, from 1917 to the present. Its main business is household goods, including the German Zwilling (孖人牌) brand and Staub from France, which are both Zwilling products, as well as Corning products from the United States.

Mr. 余壽寧, Chief Executive Officer of 昌興有限公司, said that, generally speaking, communication is the most important element of business between Hong Kong and mainland China. Previously, when their email ran into a firewall, messages would arrive slowly or the system would stop working properly, so colleagues could not work effectively and communication was inconvenient at urgent moments. Microsoft has a strong anti-virus system, so they are confident and do not worry about computers being infected, and mailbox storage can also be larger.

Superhub provides 24x7 technical support, so help is available at any time. The partnership between Superhub and Microsoft also gives 昌興 more confidence; working with Superhub feels like working with Microsoft.

Office 365 lets all email be viewed on mobile phones and tablets. Even if a mobile phone is lost, all email can still be accessed and nothing is lost. In addition, because they have many meetings and events, the shared calendar makes it easy to arrange schedules anytime, anywhere, and convenient for a third party, such as a secretary, to arrange and update appointments. Mr. 余 said many business owners and friends like using this type of system. Overall, 昌興 is confident in and satisfied with the combination of Office 365, Superhub, and 昌興's own people.

Other similar case studies

Work in sync instantly, anytime, anywhere

Learn more >

Flexible billing, flexible cost control

Learn more >

24x7 technical support: professional, quality, timely service

Learn more >

About WmiPrvSE.exe application errors


Hello, everyone.
This is the Windows Support team.

In this post, we introduce an application error that we are aware of on Windows Server 2012 R2 and Windows Server 2016, which can occur when a large number of WMI queries are issued at the same time.

Windows Management Instrumentation (WMI) is a core Windows service that provides an interface for viewing and manipulating system management information.
Because of this, applications that monitor servers and clients often use the WMI service to collect management information.

If WMI queries against the same WMI class are issued simultaneously and in large numbers (roughly 50 as a guideline), WmiPrvSE.exe may hit an application error (Event ID: 1000).


* If the application error has an exception code other than 0xc00000fd, it does not correspond to this issue.

When a WMI query is issued, the request is added to a queue inside the WmiPrvSE.exe process that has loaded the provider for that WMI class.
WmiPrvSE.exe then processes the queued requests in order.
However, if requests are added to the queue faster than WmiPrvSE.exe can process each one, the queue keeps growing, a stack overflow exception (0xc00000fd, STATUS_STACK_OVERFLOW) occurs, and the process ends in an application error.

As an example, we have confirmed that a stack overflow occurs with the following steps.

1. Save the following script as a .ps1 file.

while(1){
gwmi -query "select * from Win32_TerminalService"
}

2. Run the created script in multiple PowerShell windows.

3. Keep the scripts running and errors will begin to occur.

4. Check the events logged when the error occurs; they show that a stack overflow exception (0xc00000fd, STATUS_STACK_OVERFLOW) has occurred (a quick way to check this is sketched below).
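
As a side note, the following is a minimal PowerShell sketch (the filter values are assumptions based on the event described above) for pulling recent WmiPrvSE.exe crash events from the Application log and checking whether the exception code is 0xc00000fd:

# List recent application-error events (Event ID 1000), keep only the ones for
# WmiPrvSE.exe, and flag whether the message contains the 0xc00000fd
# (STATUS_STACK_OVERFLOW) exception code. Run from an elevated PowerShell prompt.
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'Application Error'; Id = 1000 } -MaxEvents 50 |
    Where-Object { $_.Message -match 'WmiPrvSE\.exe' } |
    Select-Object TimeCreated, @{ Name = 'IsStackOverflow'; Expression = { $_.Message -match '0xc00000fd' } } |
    Format-Table -AutoSize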

We are considering improving this behavior in a future version.
For now, to avoid the issue, please adjust the number of WMI queries issued at the same time.

Navigating the unknown: how to sell on social media


In 1977, the two Voyager space probes were launched. One was sent towards Jupiter, and the other towards Saturn. On board both were the Golden Records.

The Golden Records had information about us. Music and photos and insight - all the information aliens might need if they wanted to get in touch. In 2012, Voyager 1 entered interstellar space. It has travelled further than anything else mankind has ever made. And the Golden Record is still on board.

We have no idea if it'll ever be seen again - let alone if any aliens will ever use it to find us. All there is to do is wait.

Today, it seems crazy that we could put information "out there" and maybe one day someone would be in touch. But why? We still do it every day.

Today's Golden Records

Look at social media. Every day, hundreds of thousands of Tweets are tweeted, Facebook posts are posted, and LinkedIn updates are updated. That's a lot of information we're sending out, floating around "out there". And we have no idea if it'll ever be seen - let alone if we'll ever hear back. It's especially tough for businesses, who are relying on more than just a dopamine hit from their status updates.

The content you share with followers, customers, and prospects really needs to count. And unlike the Golden Records, there's no time to wait. When it comes to social selling, you need to know the information you're sending out is finding someone who can use it.

Making contact

A lot goes into a good social selling strategy. It starts with having a strong personal brand (you can make a really good start with your personal brand by following the advice in our 5-minute guide). But once you've got a professional photo and a curated stream of great content, you've got to make sure you're reaching the right people with the right messages.

You need to target, understand, and engage your customers.

On LinkedIn, Sales Navigator does just that - it's the best version of the social network for salespeople. It helps you track important contacts, recommends leads, and lets you harness the combined reach of your entire network. But when it comes time to reach out to the prospects and customers you've found, PointDrive adds even more value.

Get your content together. PointDrive puts your content into a magazine-style message that looks good on any device. From there, you can broadcast on all known frequencies.

Send it to the right people. With PointDrive, you're not sending information "out there". Beam all your research and insights from Sales Navigator into PointDrive, and know the right content reaches the right people.

Track its success. PointDrive will track which content your prospect has read, so you can pinpoint what they're interested in. It'll even tell you who they've sent it on to in their organisation, so you can confidently venture onwards.

We've come a long way since 1977. And the Golden Records may never be discovered by an alien species. But at least your prospects' organisations no longer have to be a great unknown.

Make sure what you're putting out there is reaching the right people, with Sales Navigator and PointDrive. Find out more here.

The Adventure Begins: Plan and Establish Hybrid Identity with Azure AD Connect (Microsoft Enterprise Mobility and Security)


Greetings and salutations fellow Internet travelers! Michael Hildebrand here...as some of you might recall, I used to pen quite a few posts here, but a while back, I changed roles within Microsoft and 'Hilde - PFE' was no longer.

Since leaving the ranks of PFE, I've spent the last couple of years focused on enterprise mobility and security technologies. Recently, I was chatting with the fine folks who keep the wheels on this blog when I asked "Hey – how about a series of guest-posts from me?" They said if I paid them $5, I could get some air-time, so here we are.

My intentions are simple - through a series of posts, I'll provide high-level discussion/context around the modern Microsoft mobility and security platform to "paint you a picture" (or a Visio) of where we are today then I'll move on to 'the doing.' I'll discuss how to transform from 'on-prem' to 'hybrid-enabled' to 'hybrid-excited.' I'll start that journey off in this post by establishing the foundation - hybrid identity – then, in subsequent posts, I'll work through enabling additional services that address common enterprise scenarios. Along the way, I'll provide job aids, tips and traps from the field.

It continues to be a very exciting time in IT and I look forward to chatting with you once more. Let's roll.

Azure AD – Identity for the cloud era

The hub of Microsoft's modern productivity platform is identity; it is the control point for productivity, access control and security. Azure Active Directory (AAD) is Microsoft's identity service for the cloud-enabled org.

If you want more depth (or a refresher) about what Azure Active Directory is, there's no shortage of content out there. I'll be lazy and just recommend a read of my prior post about "Azure AD for the old-school AD Admin." It's from two years ago – which makes it about 2x older in 'cloud years' – and as such, it suffers a bit from 'blog decay' on some specifics (UIs and then-current capabilities), but the concepts are still accurate. So, go give that a read and then come on back … I'll wait right here for you.

The Clouds, they are a-changin'

As an "evergreen" cloud service, AAD sees continuous updates/improvements in the service and capability set. Service updates roll out approximately every month – so, we're at around 36 +/- AAD service updates since my Jan 2015 article.

To stay on top of AAD updates, changes and news, the EMS blog (Link) is always a good first stop.

If you like "Release Notes" style content, starting last September (2017), the 'What's new in AAD' archive is available - https://docs.microsoft.com/en-us/azure/active-directory/whats-new.

Recently, a change to the AAD Portal homepage added a filterable 'What's new in Azure AD' section –

Also, the O365 Message Center has a category for "Identity Management Service" messages:


An Ambitious Plan

Here's the plan for this post, this series and some details about my "current state" environment:

  • I'm starting out with an on-prem, single AD forest w/ two domains (contoso.lab and corp.contoso.lab)
    • Basically, the blue rounded-corner box in the Visio picture above:

  • In this post, I'm going to establish a hybrid identity system, and bridge on-prem AD to an AAD tenant via Azure AD Connect (AAD Connect)
    • Choose password hash for the authentication method
      • This enables password hash sync from AD to AAD
    • Filter the sync system to limit what gets sync'd from AD to AAD
    • Prepare AD for eventual registration of Domain-Joined Windows PCs from AD to AAD
  • In subsequent posts, I'll build on this foundation, covering topics such as custom branding for the cloud services, self-service password reset, device registration, Conditional Access and who knows what other EMS topics.
    • I'll be assigning homework, too, lest ye fall asleep
  • I'll end up with an integrated, hybrid platform for secure productivity and management
  • These are pretty bold ambitions – but we'll get there, and the beauty of the cloud services model is that "getting there" isn't nearly as hard as that list makes it seem.

Now let's get down to brass tacks. For the rest of this post, I'll focus on considerations, planning and pre-reqs for getting Azure AD Connect up and running and then I'll walk through the setup and configuration of AD and AAD Connect to integrate an on-prem AD forest with an on-line AAD tenant.

  • If you already have AAD Connect up and running, KUDOS! Read-on, though, as you might find some helpful tips or details you weren't aware of or didn't consider.

NOTE – As with most blogs, this isn't official, sanctioned Microsoft guidance. This is information based on my experiences; your mileage may vary.

Overall AAD Connect Planning

Microsoft has done a lot of work to gather/list pre-reqs for AAD Connect. Save yourself some avoidable heartburn; go read them … ALL of them:

AAD Connect has two install options to consider – Express and Custom: https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect-select-installation

  • The Express install of Azure AD Connect can get you hybrid-enabled in around 4 clicks. It's easy and simple - but not very flexible. Express setup requires an Enterprise Admin credential to perform all of the AD changes and you don't have a lot of control over those changes (i.e. naming service accounts, where in AD they go, which OUs get permissions changes, etc).

  • The Custom install of Azure AD Connect provides more flexibility, such as allowing you to pre-create the service accounts (per your AD naming/location standards) as well as assign scoped AD permissions as part of the pre-work before installing AAD Connect.

Consider AAD Connect 'Automatic Upgrade' to keep AAD Connect up-to-date automatically:

Service accounts

AAD Connect uses a service account model to sync objects/attributes between AD and AAD. There are two service accounts needed on-prem (one for the sync service/DB and one for AD access) - and one service account needed in AAD.

Service account details:

  • Sync service account - this is for the sync service and database

  • AD access service account - this is a Domain User in the AD directory(ies) you want to sync.
    • An ordinary, low-privilege Domain User AD account with read access to AD is all that is needed for AAD Connect to sync AD to AAD for basic activities.
    • There are notable exceptions that require elevated permissions and two I'll cover here are password hash sync and password writeback (for self-service password reset/account unlock)

    TIP - Create your AD access service account in AD and assign any custom permissions to it BEFORE you install AAD Connect (a quick creation sketch follows this section).

    TIP - This account itself doesn't need to sync to AAD and can/should reside in a 'Service Account' OU, with your other service accounts, filtered from sync.

    TIP – Make sure you secure, manage and audit this service account, as with any service account.

  • AAD cloud access account
    • This is a limited, cloud-only account in Azure AD, created by the AADC install process, which sets a long, complex password that is set to not expire.

    TIP - The username of this account is derived from the AAD Connect server name

    • For example, my AAD Connect server is named "CORP-AADC01" so the AAD service account ID will be something like "Sync_CORP-AADC01_1@mycorp.onmicrosoft.com"

    TIP – This account won't be seen anywhere in AD; it's only part of AAD and the sync system. You can see it in the configuration pages of the Synchronization Service Manager tool - screen snip below.

    • The Synchronization Service Manager tool is sometimes used for advanced sync settings and is out of scope for this article; I strongly urge you to not wander around in there.

    TIP - The ID can also be seen in the AAD portal 'Users' section.
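
As a rough sketch (the account name, OU path and domain below are made-up examples, not required values), the AD access service account can be pre-created with the ActiveDirectory PowerShell module before running the AAD Connect wizard:

# Create a low-privilege AD access service account in a dedicated Service Accounts OU.
# Replace the name, OU path and password handling with your own standards.
Import-Module ActiveDirectory
$securePwd = Read-Host -AsSecureString -Prompt "Password for the AAD Connect AD access account"
New-ADUser -Name "svc-aadc-ad" `
    -SamAccountName "svc-aadc-ad" `
    -Path "OU=Service Accounts,DC=corp,DC=contoso,DC=lab" `
    -AccountPassword $securePwd `
    -PasswordNeverExpires $true `
    -Enabled $true `
    -Description "AAD Connect AD access service account"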


Planning on-prem sync filtering

You can limit what users, groups, contacts and devices are sync'd between on-prem AD and Azure AD. This is known as 'filtering' and can be done based on forest, domain, OU or even object attribute values. Also, for a pilot or PoC, you can filter only the members of a single AD group.

TIP – Thoroughly plan/test a sync filtering strategy to understand what will/won't sync. In prod, do it once; do it right.

Read this link for more information/details about sync filtering:

Points to consider:

  • Not everything in AD is sync'd, even if you don't filter –
    • For example, DNS zones don't get sync'd. GPOs don't get sync'd. Objects with the "isCriticalSystemObject" attribute equal to "true" won't sync – so many sensitive AD objects won't sync (i.e. Domain Admins group in AD)
    • However, unless filtered, some objects may sync that you don't need/want in AAD (i.e. the DNS Admins group in AD, your service account OU, etc.)
  • Any OU that has/will have Windows 10 PCs that you want to register/sync to AAD (called 'Hybrid Azure AD Join') should be selected for sync, as Azure AD Connect plays a part in sync'ing Win 10 PCs to Azure.
    • Azure AD Connect does not play a part in sync'ing pre-Win 10 PCs; they can sync/register in AAD on their own (after you install an update/MSI to those OSes), regardless of their OU being targeted or not
    • We'll get into the weeds of Hybrid Azure AD Join, AAD Join and Azure Device Registration Service in a later post
  • For a pilot, you can simplify what gets sync'd by selecting a single group in AD to sync
    • Use a "flat" Global Security group in AD; any nested groups within it won't sync
    • If you also setup OU filtering, be sure the target group and its members (users, Windows 10 PCs, etc.) are all in OUs that are in-scope for sync – OU filtering is evaluated before the group filter.
    • You can't browse for the group via the wizard – you need to type in the group name or DN attribute (the 'resolve' button will verify it, though)
    • The UI option to filter by group only appears in the initial setup of AAD Connect. If you don't select it during the first run, it won't show up in the UI in subsequent runs of the tool.

    TIP – Group-filtered sync isn't supported for production implementations

  • New OUs/sub-OUs that are created after you've set up your sync filtering in AAD Connect may be sync'd by default. If so, this may be an unwelcome surprise.
    • I'll cover more on this later in the AAD Connect configuration section

UPNs and email addresses – should they be the same?

In a word, yes. The best experience for your users (seamless SSO with minimal login prompts or pop-ups/sign in errors, etc.) will be achieved when the on-prem UPN matches the AAD UPN, as well as the primary email address (and SIP address for overall consistency). This assumes there is an on-prem UPN suffix in AD that matches the publicly routable domain that your org owns (i.e. ... @microsoft.com).

"Ok, but is it required?" No, but over time, it will make lives better with less confused users who make fewer helpdesk calls and are happier with IT.

Points to consider:

  • Recall the pre-requisites doc/link – it lists a line-item to add any custom domain(s); go through the process to add and 'verify' your public domain names (called 'custom' domains in O365/AAD) before setting up AAD Connect. There is a step during AAD Connect setup that will poll on-prem AD for UPN suffixes and AAD for matching verified custom domains. This is visible in my step-by-step later.
  • To avoid additional work and potential issues, it is strongly recommended that you address UPN/ID issues BEFORE you install AAD Connect

AAD Connect – Install and configuration

I basically break this phase up into three sections:

  1. AAD Connect server setup/tools install
  2. On-prem AD config
  3. Initial sync config

1. AAD Connect server setup and tools install
    1. On my AAD Connect server (these steps are for a WS 2012 R2 x64 instance – again, read all the AAD Connect pre-reqs from the link above; your specific steps may vary):
      1. Disable IE Enhanced Security Config and enable Cookies in the IE browser settings
      2. Install the RSAT AD tools – via Server Manager or PowerShell <from elevated PoSh>
        1. Add-WindowsFeature RSAT-AD-Powershell
      3. Download and update to WMF 5.0, then install the AAD PowerShell v1 module
        1. Reboot
        2. Open elevated PowerShell and run Install-Module -Name PowerShellGet -Force
        3. From the same PowerShell console, run Install-Module -Name MSOnline
      4. Download AAD Connect (AzureADConnect.msi) and install it on the target AAD Connect server
        1. https://www.microsoft.com/en-us/download/details.aspx?id=47594
      5. As soon as the install completes, the AAD Connect configuration wizard will auto-initiate – don't run through it; exit/close out of the tool/wizard.
      6. The AAD Connect setup installs the sync service and several pre-reqs, and copies some PowerShell scripts/functions locally
2. On-prem AD config
    1. Prepare on-prem AD for Azure AD integration (I'll also initialize AD for Azure AD Device Registration Service – AzDRS)
      1. Use PowerShell to establish the Service Connection Point (SCP) object and associated attributes in AD - More info
      1. This process creates an object in on-prem AD with pointers to the associated on-line AAD tenant name and GUID – this information is used by several AD <-> AAD integrations such as AAD device registration, device write-back, etc.
        1. For example, this information is used by Windows domain-joined PCs to "find" the connected AAD tenant and register there (aka "Hybrid Azure AD Join.")
      2. From the AAD Connect server:
        1. Run a PowerShell window as an Enterprise Admin account (this process needs to create a container in the Configuration partition in the AD forest):
        2. Import-Module -Name "C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep\AdSyncPrep.psm1" <press enter>
        3. Initialize-ADSyncDomainJoinedComputerSync <press enter>
        4. PowerShell will prompt for AdConnectorAccount : enter the AD access service account and press enter
          1. The format is "domain\ID" - CORP\SRV-AADC
        5. A logon box will pop up; enter the Azure AD credentials

          1. This should be a Global Admin ID from Azure AD
          2. The format is upn-style - admin@woodgroove.onmicrosoft.com
      3. Verified results:
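
Here's a minimal PowerShell sketch to spot-check the result (the container name and attribute reflect my understanding of what Initialize-ADSyncDomainJoinedComputerSync creates; treat them as assumptions and verify against the official documentation):

# Look for the Device Registration Configuration container (and the SCP object under it)
# in the Configuration partition of the forest; the 'keywords' attribute should reference
# the connected AAD tenant name and ID.
Import-Module ActiveDirectory
$configNC = (Get-ADRootDSE).configurationNamingContext
Get-ADObject -SearchBase "CN=Device Registration Configuration,CN=Services,$configNC" `
    -Filter * -Properties keywords |
    Select-Object Name, DistinguishedName, keywords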


  2. Review/verify/edit the AD access service account has permissions for the desired Azure AD services/features (see above Service Accounts section)
    1. Remember, password hash sync and self-service password reset (SSPR) each require unique manual permissions edits in AD    
      1. This is a commonly missed step or not done correctly

TIP - You can enable SSPR/pwd writeback without enabling password hash sync; you can offer your users self-service password reset even if you're not ready to sync passwords to Azure AD.

3. Initial Sync config

    Let's take a breath, pause and recap: AAD Connect is installed and several on-prem decisions and configurations have been completed (sync filtering decisions, service accounts created, custom permissions assigned, 'Service Connection Point' container created and verified in AD, etc.).

    1. Next, I establish the core AD > Azure AD sync configuration and start actually sync'ing objects to AAD.
      1. From the AAD Connect server, launch the AAD Connect tool/wizard, agree to the license terms checkbox and click 'Continue.'
      2. We're doing 'Customize' (vs 'Express') for the reasons mentioned above (i.e. more flexibility in creating/naming/locating the service accounts)

      1. On the "Install required components" screen, leave all boxes blank – AAD Connect will setup the sync service and a 'virtual' service account on the AAD Connect server. This ID and password are system-managed and won't require any on-going management. Click 'Install.'

      1. Next, select the User sign-in/authentication method. My thinking has evolved over time on this aspect. I started out favoring federation with ADFS and on-prem passwords/auth, then I moved on to "Pass-through authentication" (PTA) and on-prem passwords/auth (I still really like PTA if there's a need to keep password hashes on-prem).

        However, now I've seen the light and "Password Synchronization" is my preferred choice. This is by far the simplest solution, and I'm comfortable w/ the security of password hash sync/storage. This is usually referred to as 'password hash sync' or PHS since AAD Connect takes the on-prem password hash value, processes it with additional hashing, then syncs that value to AAD. Also, with PHS, I get more complete coverage from the AAD Identity Protection capability and Azure-cloud levels of high-availability.

        Here's a great blog about the auth choices and decision: Sam D's auth choice blog.

        1. Also select the check box to "Enable single sign-on"

      1. On the "Connect to Azure AD" screen, enter an Azure AD global admin account (which isn't saved; it's only used during setup). Use a cloud-only ID from the tenant – i.e. admin@mycorp.onmicrosoft.com. This sets up the Azure AD tenant for sync and creates the AAD cloud access service account mentioned above in the service accounts section.


  1. On the "Connect your directories" screen, select/verify the target AD forest(s) and click "Add Directory" then select to "Use existing AD account." Enter the AD access service account credentials (from the above service accounts section) and click OK, then click Next.

TIP – You don't select the specific domains/OUs you want to sync here; that's done in a later step

  1. Review/select the Azure AD sign-in configuration - hopefully keeping the default which sets the on-prem UPN value as the login ID for Azure AD.

TIP – In the long red box above, you see I have a UPN suffix in AD that matches a verified custom domain name that I registered in my AAD; this is due to the pre-work that I mentioned in the UPN section above.

TIP - If you haven't verified a custom domain, you'll see an option to 'Continue without any verified domains' (i.e. for a test or PoC environment)

  1. On the "Domain and OU filtering" screen, select "Sync selected domains and OUs" and select the domains/OUs to sync - or select "Sync all domains and OUs" if that's how you want to roll.
    1. Remember, even if an entire forest/domain is selected, not everything in the domain will sync.

Repeated TIP – Thoroughly plan/test a sync filtering strategy to understand what will/won't sync. In prod, do it once; do it right.


TIP – As mentioned above in the sync planning section, recall that as/if new OUs/subOUs are created, they might be sync'd to AAD automatically.

Here's how to adjust your sync settings to control new OU sync:

The checkbox "state" in this UI indicates if new OUs will sync or not:

  1. If you DO NOT want subsequent new sub OUs to sync (my personal preference), clear all the check marks then click the deepest level, specific OU boxes you want to sync. The parent domain and OU box(es) will flip to solid gray, without a checkmark
    1. In this state:
      1. Only the selected OUs under CORPORATE will sync (white box with black checkmark)
      2. New OUs created anywhere will not sync

    1. If you DO want subsequent new sub OUs to sync, click the parent domain/OU box so it has a black checkmark (all sub-OUs will also get checked). Now, de-select the sub OU box(es) you don't want to sync, leaving the desired OUs checked. The parent OU box will turn gray with a black checkmark.
      1. In this state:
        1. The selected OUs under CORPORATE will sync (white box with black checkmark).
        2. New sub OUs created under the corp.contoso.lab domain and/or the CORPORATE OU will sync

    2. You can also configure a mixed state:
      1. In this state:
        1. New sub OUs created directly under corp.contoso.lab will not sync (gray box without black checkmark)
        2. The selected OUs under CORPORATE will sync (white box with black checkmark).
        3. New sub OUs created under the CORPORATE OU will sync (gray box with black checkmark)

        Example:

  • New 'Sync-Test-OU' was created in AD.
  • The new 'Sync-Test-OU' was added to sync filtering without making any changes to AAD Connect

TIP – Recapping:

  • White box without checkmark – won't sync
  • White box with black checkmark – will sync
  • Gray box without checkmark – new sub OUs won't sync
  • Gray box with black checkmark – new sub OUs will sync
  1. Review the unique identifier page for the sync configuration – the default is fine for my setup. Click Next.

  2. On the "Filter users and devices" screen, choose 'Synchronize all users and devices'

TIP – Even though the UI states this will synchronize all users and devices, that isn't really what happens. This option will sync all users, groups, contacts and Win 10 computer accounts "within the scope of any filtering you defined."

  1. If you decided earlier that you want to use group filtering for sync (i.e. for a PoC), you choose 'Synchronize selected' here and enter the group name or DN and click 'resolve' to verify it.
    1. If you don't see this screen or if you are considering this, review the above details about group filtering – it is a common area of confusion and unexpected results/behavior.

On the "Optional features" screen, verify all "Optional features" except Password synchronization are blank and click Next.

  • The "Password synchronization" option is checked and grayed out due to the earlier selection of "Password synchronization" for User sign-in.

  1. On the "Enable single sign-on" screen, click 'Enter credentials' then enter Domain Admin credentials for the domain(s) where your SSSO users reside (don't be confused like I was when the pop-up asked for "Forest Credentials" – it's asking for a Domain Admin ID).
  2. Click OK. Then click Next.

    1. This step creates a computer account called "AZUREADSSOACC" and puts it in the built-in COMPUTERS container in the target domain(s).
    2. Don't pre-create this account – let AAD Connect do it, as it populates some specific attributes/values for this computer account.
      1. You can move the computer account to an OU of your choice and I'd recommend you configure it for protection from accidental deletion (right-click > properties > object tab).

  3. On the "Ready to configure" page, verify the 'Start the synchronization process…' option is checked (default) and click 'Install.' Click Exit after the 'Configuration complete' page displays.

  4. Review the Application Event Log on the AAD Connect server for related events.

  5. Sign in/refresh the Azure/AAD portal
    1. Verify sync by looking for your targeted on-prem objects in AAD and review the Azure AD Connect section of the Azure/AAD portal for successful sync messages.
      1. On-prem users sync'd are listed with a 'SOURCE' of 'Windows Server AD'
      2. On-prem groups sync'd are listed with a 'Membership type' of 'Synced'

TIP - Subsequent delta synchronizations occur approximately every 30 minutes (and the password hash sync process runs about every 2 minutes, if you've enabled it).

TIP - You can easily trigger a sync via PowerShell at any time. I use a quick one-liner straight from the 'Run' dialog box on my AAD Connect server after making on-prem AD changes that I want to sync right away:

powershell -ExecutionPolicy Bypass Start-AdSyncSyncCycle
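
If you prefer to run it from an open PowerShell session instead, the ADSync module's cmdlet also takes a -PolicyType parameter - a quick sketch:

# Delta sync covers normal day-to-day changes; use Initial for a full sync after
# larger configuration changes (e.g. filtering changes).
Import-Module ADSync
Start-ADSyncSyncCycle -PolicyType Delta
# Start-ADSyncSyncCycle -PolicyType Initial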

TIP – To avoid surprises with Automatic Upgrade of AAD Connect, now is a good time to review/verify the state of it for your AAD Connect via PowerShell:

Get-ADSyncAutoUpgrade
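
And if you need to change the state, a small sketch (the Suspended state is set by the system itself and isn't something you'd normally set by hand):

# Review the current Automatic Upgrade state, then enable or disable it as desired.
Get-ADSyncAutoUpgrade
Set-ADSyncAutoUpgrade -AutoUpgradeState Enabled    # or Disabled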

HOMEWORK – Go school yourself about AAD Connect Health – I think you'll like it

If you're a visual person, like me, here's where we are on our plan:

Ok folks, there you have it … a brief refresher on AAD as the ID hub of our modern productivity and security platform, a sizeable collection of "points to consider" when planning AD sync and then a walk-through of setting up AAD Connect to hybrid-enable a sample Active Directory forest.

Hopefully, that level of detail was helpful.

Tune in next time when I'll continue the march towards 'hybrid-excited.'

Cheers!

"Welcome back, (Hilde) Kotter"

P.S. Did anyone catch how the title of this post pays homage to the awesome movie "Remo Williams: The Adventure Begins"?

Windows 10: Using CopyProfile for the “Start Menu” has been deprecated.


Applies to:

Windows 10 1803 (TBD)

Windows 10 1709 (Fall Creators update)

Windows 10 1703 (Creators update)

Windows 10 1607 (Anniversary update) / Windows Server 2016

Windows 10 1511 (November update)

Windows 10 1507 (RTM)

[Problem description]

  • Start Menu does not work at all.
  • Start Menu (and Cortana) will become unresponsive, leaving users without a working Start Menu.

Q:  Moving forward, we can no longer expect CopyProfile to set the Start layout, pinned items, and customized backgrounds? Is that correct?

A:  That is correct.  "The Start Menu Product Group does not support customizing the Start layout with copyprofile."

“Using CopyProfile for Start menu customization isn't supported. Here are the ways to manage custom Start layouts in Windows 10:

Source:

Customize the Default User Profile by Using CopyProfile
https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/customize-the-default-user-profile-by-using-copyprofile

Q:  Are other portions of CopyProfile still supported in Windows 10?

A:  Yes, everything in CopyProfile except the “Start Menu” and “Taskbar” layout are supported in Windows 10.

[Solution]

Q:  What’s the alternative to customize the “Start Menu” during a deployment or imaging?

A:  Use the Group Policy:

User or Computer / Administrative Templates/Start Menu and Taskbar/Start Layout

Source:

Configure Windows 10 taskbar

Manage Windows 10 Start and taskbar layout

Pimp my Windows 10 – Business Customization Reference

https://blogs.technet.microsoft.com/ash/2016/03/07/pimp-my-windows-10-business-customization-reference/

Q:  How does it work?

A:  Import-StartLayout modifies the default user profile. All new users who log in after Import-StartLayout has been run will get the new Start layout.
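
As a quick sketch of the typical flow (the file paths are just examples):

# On a reference machine, export the customized Start layout to XML.
Export-StartLayout -Path "C:\Layouts\CustomStartLayout.xml"

# For a partial lockdown, edit the exported XML and add
# LayoutCustomizationRestrictionType="OnlySpecifiedGroups" to <DefaultLayoutOverride>,
# as described in the next answer.

# During imaging, apply the layout to the default user profile of the mounted/target image.
Import-StartLayout -LayoutPath "C:\Layouts\CustomStartLayout.xml" -MountPath "C:\"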

Q:  I want to be able to 'force' some items to be pinned but I also want my end-users to be able to customize their own apps in the Start Menu.

A:  To let end users pin their own Start menu items while still forcing some pinned items, there is a 'Partial Lockdown' option where you specify "OnlySpecifiedGroups".

Locate the <DefaultLayoutOverride> section and add a parameter as detailed below.

<DefaultLayoutOverride LayoutCustomizationRestrictionType="OnlySpecifiedGroups">

Source:

Customize and export Start layout
https://docs.microsoft.com/en-us/windows/configuration/customize-and-export-start-layout

Windows 10 Start Layout Customization

https://blogs.technet.microsoft.com/deploymentguys/2016/03/07/windows-10-start-layout-customization/

Yong (Hailing from Baton Rouge, Louisiana).

Windows 10 v1607 – DualScan behavior when "Do not allow update deferral policies to cause scans against Windows Update" is set.


Applies to:

Windows 10 1803 (TBD)

Windows 10 1709 (Fall Creators update)

Windows 10 1703 (Creators update)

Windows 10 1607 (Anniversary update) / Windows Server 2016


Does not apply to:

Windows 10 1511 (November update)

Windows 10 1507 (RTM)


While I was in Oxnard, California, the following topic came up.

Before you read this, you want to make sure that you go through:

Demystifying “Dual Scan”

https://blogs.technet.microsoft.com/wsus/2017/05/05/demystifying-dual-scan/

Improving Dual Scan on 1607

https://blogs.technet.microsoft.com/wsus/2017/08/04/improving-dual-scan-on-1607/

Using ConfigMgr With Windows 10 WUfB Deferral Policies

https://blogs.technet.microsoft.com/configurationmgr/2017/10/10/using-configmgr-with-windows-10-wufb-deferral-policies/

Once you have read the three blog posts above, continue on for this particular issue.

[Problem description]

If you have “Windows 10 1607” deployed in Semi-Annual Channel (used to be known as (u.t.b.k.a) Current Branch for Business (CBB)).

And have:

  • WSUS set to not deploy “Windows 10 1703” or “Windows 10 1709”.
  • According to Improving Dual Scan on 1607, KB4034658 (August 2017 Cumulative update) introduces a new GPO ("Do not allow update deferral policies to cause scans against Windows Update").

https://blogs.technet.microsoft.com/wsus/2017/08/04/improving-dual-scan-on-1607/

And if you have the October 2017 Cumulative update KB4041691 installed.

And these other hotfixes:

KB3186568

KB4013418

KB4023834

KB4033637

KB4035631

KB4038806

KB4051613

  • And we have set the "Do not allow update deferral policies to cause scans against Windows Update" in the registry:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]

"DisableDualScan"=dword:00000001

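Equivalently, a small PowerShell sketch that sets the same value as the .reg content above:

# Create the WindowsUpdate policy key if it doesn't exist and set DisableDualScan = 1.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'DisableDualScan' -Value 1 -Type DWord
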
Expectation:

              When going to "Windows Update" and clicking on "Check online for updates from Microsoft Update", "Feature Update to Windows 10, 1703" should not be offered.

Actual result:

              "Feature Update to Windows 10, 1703" is being offered.

[Cause]

Why?

Windows 10 1607 allows the feature update to be deferred for up to 180 days (prior to version 1703, you can only defer up to 180 days).

With manual/ad hoc scans against Microsoft Update, you are falling under the scenario below:

“Windows updates from WSUS, supplemental updates from WU - the "on-premises" scenario. Here you expect your users to perform ad hoc scans every so often to get updates that are necessary, but have not been deployed by the enterprise admins. You want quality updates, but do not want feature updates offered during these scans. The policy to disable Dual Scan was created for this scenario: you can enable the new policy, along with your deferral policies, and those deferral policies will only take effect when scanning against Windows [or Microsoft] Update.”

How can you check if DualScan is set?

              PowerShell (Run as Admin)

$MUSM = New-Object -ComObject "Microsoft.Update.ServiceManager"

$MUSM.Services


If IsDefaultAUService is True for "Windows Server Update Service" and False for "Windows Update", then the following user-set values could be causing it:

HKLM\Software\Microsoft\WindowsUpdate\UX\Settings\DeferUpgrade = 1

// Was not set in this case.

And/or

HKLM\Software\Microsoft\WindowsUpdate\UX\Settings\BranchReadinessLevel = 32

// Was not set in this case.
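
A small sketch to check those values (plus the DisableDualScan policy value) in one go; a missing property simply means the value is not set:

# Check the user-set Windows Update UX values and the DisableDualScan policy value.
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\WindowsUpdate\UX\Settings' -ErrorAction SilentlyContinue |
    Select-Object DeferUpgrade, BranchReadinessLevel
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate' -ErrorAction SilentlyContinue |
    Select-Object DisableDualScan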

Version | How long can you defer? | If you are in SAC-T (u.t.b.k.a CB), what feature update are you going to get? | If you are in SAC (u.t.b.k.a CBB), what feature update are you going to get?
1803 | 365 days | 1903 | 1809
1709 | 365 days | 1809 | 1803
1703 | 365 days | 1803 | 1709
1607 | 180 days | 1709 | 1703
1511 | 180 days | Not applicable, Dual Scan was not present. | Not applicable, Dual Scan was not present.
1507 | 180 days | Not applicable, Dual Scan was not present. | Not applicable, Dual Scan was not present.

Source:

Configure Windows Update for Business

https://docs.microsoft.com/en-us/windows/deployment/update/waas-configure-wufb

Why could this be a problem?

· If you are using 3rd-party encryption and the feature update is deployed without the '/ReflectDrivers' switch and compatible 3rd-party encryption upper-filter drivers, it can brick your system.

· You have in-house apps that are not yet compatible with "Windows 10 1703" and need additional time before you upgrade.

[Solution]

Blocking access to "Windows Update" via:

    • Computer Configuration\Administrative Templates\System\Internet Communication Management\Internet Communication settings\Turn off access to all Windows Update features

    or

    • User Configuration\Administrative Templates\Start Menu and Taskbar\Remove links and access to Windows Update

    Note:

      • Windows UI
        • HKLM\Software\Microsoft\WindowsUpdate\UX\Settings
      • Group Policy
        • HKLM\Software\Policies\Microsoft\Windows\WindowsUpdate
      • MDM (CSP by SCCM 1706 WUfB policy, Intune, or other MDM Providers)
        • HKLM\Software\Microsoft\PolicyManager\current\device\Update

        What do you lose if you disable "Windows Update"?

        · Driver updates (e.g. print drivers, etc.)

        · Universal app (Windows Store app) updates

        Thanks.

        Yong

        Stop hurting yourself by: Not applying the non-security updates for Windows and Windows Server.


        Applies to:

        Windows 8.1/Windows 2012 R2

        Windows 8/Windows 2012

        Windows 7 SP1/Windows 2008 R2 SP1

        Windows Vista/Windows 2008

        Does not apply to:

        Windows 10 1803 (TBD)

        Windows 10 1709 (Fall Creators update)

        Windows 10 1703 (Creators update)

        Windows 10 1607 (Anniversary update) / Windows Server 2016

        Windows 10 1511 (November update)

        Windows 10 1507 (RTM)

        I was on-site this year (2018) and heard the following:

        "We don’t always install hotfixes; We install hotfixes if that specific problem is experienced in the environment. Security and Critical patches take precedence and, in the case of servers, are usually the only update classification we install. KBxxxxxx is entirely optional and doesn’t show up in the WSUS catalog, another reason why we never caught wind of it."

        Regarding item #1: "We install hotfixes if that specific problem is experienced in the environment".

        Answer #1:  The truth is, you probably have the issue, and just haven’t gotten to it.  It requires a lot of time investment by using advanced tools such as Sysinternals/ETL tracing (WPRUI/WPR/Xperf), WinDbg (or DebugDiag)/Message Analyzer (or Wireshark or Netmon) and other logs.  Or you are understaffed and are not able to take the time to fix the issue.

        A lot of companies just end-up rebooting the system or rebuilding the system(s).


        Regarding item #2: "Security and Critical patches take precedence and, in the case of servers, are usually the only update classification we install."

        Answer #2:  Probably the reason that your servers are not 'stable'.

        Recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters

        https://support.microsoft.com/en-us/help/2920151/recommended-hotfixes-and-updates-for-windows-server-2012-r2-based-fail

        Recommended hotfixes and updates for Windows Server 2012-based failover clusters

        https://support.microsoft.com/en-us/help/2784261/recommended-hotfixes-and-updates-for-windows-server-2012-based-failove

        Recommended hotfixes and updates for Windows Server 2008 R2 SP1 Failover Clusters

        https://support.microsoft.com/en-us/help/2545685/recommended-hotfixes-and-updates-for-windows-server-2008-r2-sp1-failov

        Recommended hotfixes for Windows Server 2008-based server clusters

        https://support.microsoft.com/en-us/help/957311/recommended-hotfixes-for-windows-server-2008-based-server-clusters

        List of currently available hotfixes for the File Services technologies in Windows Server 2012 and in Windows Server 2012 R2

        https://support.microsoft.com/en-us/help/2899011/list-of-currently-available-hotfixes-for-the-file-services-technologie

        List of Domain Controller Related Hotfixes Post RTM for Windows 8.1 and Windows Server 2012 R2 (Part 2)

        https://social.technet.microsoft.com/wiki/contents/articles/26177.list-of-domain-controller-related-hotfixes-post-rtm-for-windows-8-1-and-windows-server-2012-r2-part-2.aspx

        etc...


        Regarding item #3: KBxxxxxx is entirely optional and doesn’t show up in the WSUS catalog

        Answer #3:  Yes, and hopefully you were getting the RSS feeds regarding the newly released (non-security and security) hotfixes:

        Most recent hotfixes RSS feed.

        https://blogs.technet.microsoft.com/yongrhee/2013/06/27/most-recent-hotfixes-rss-feed/

        For example, if there was a "Service Pack 3" for Windows 7 SP1 and Windows Server 2008 R2 SP1, would you have not installed it?

        “Enterprise” Convenience Rollup Update II (2) for Windows 7 SP1 and Windows Server 2008 R2 SP1

        https://blogs.technet.microsoft.com/yongrhee/2016/05/20/enterprise-convenience-rollup-update-ii-2-for-windows-7-sp1-and-windows-server-2008-r2-sp1/

        All of that led to:

        Further simplifying servicing models for Windows 7 and Windows 8.1

        https://blogs.technet.microsoft.com/windowsitpro/2016/08/15/further-simplifying-servicing-model-for-windows-7-and-windows-8-1/

        More on Windows 7 and Windows 8.1 servicing changes

        https://blogs.technet.microsoft.com/windowsitpro/2016/10/07/more-on-windows-7-and-windows-8-1-servicing-changes/


        Regarding item #4: But the KB article has the following statement:

        "A supported hotfix is available from Microsoft. However, this hotfix is intended to correct only the problem that is described in this article. Apply this hotfix only to systems that are experiencing this specific problem."

        Answer #4: It's a boilerplate template.  A lot of times, the same binary has been updated multiple times.

        Let me give you a real-world example.  A Premier customer opened a case because their server was bugchecking (a.k.a. BSOD), and a non-security update was created for them.  The company was big enough and segmented enough that their peers opened 11 more cases with the same bugcheck, and the fix was the same.  So why wouldn't you have deployed it to all the servers in the environment?

        Q:  How do I roll these fixes out?

        A:  Like you would have done in the past when you were doing a "Service Pack".  Target the IT folks first.  Then try a few of your power users in each department in your company.  Never have your C-level executives test, unless you want to spend time working on executive escalations.  And then continue with the phased deployment.

        [Solution]

        In Windows 10 and Windows Server 2016 and newer, this is why Windows as a Service (WaaS) exists.

        You get all the "Security updates" and "Non-security updates" via the cumulative rollup.

        Overview of Windows as a service

        https://docs.microsoft.com/en-us/windows/deployment/update/waas-overview

        Quick guide to Windows as a service

        https://docs.microsoft.com/en-us/windows/deployment/update/waas-quick-start

        Thanks,

        Yong “Working from home in the Museum district in Los Angeles, CA.”

        Other “Stop hurting yourself by” posts:

        Stop hurting yourself by: Disabling IPv6, why do you really do it?
        https://blogs.technet.microsoft.com/yongrhee/2018/02/28/stop-hurting-yourself-by-disabling-ipv6-why-do-you-really-do-it-2/

        WMI: Stop hurting yourself by using “for /f %%s in (‘dir /s /b *.mof *.mfl’) do mofcomp %%s”
        https://blogs.technet.microsoft.com/yongrhee/2016/06/23/wmi-stop-hurting-yourself-by-using-for-f-s-in-dir-s-b-mof-mfl-do-mofcomp-s/


        What’s Changed in MDT 8450


        Back in December, a new build of MDT was released.  Continuing the pattern established with the release before it (8443), it’s identified only by its build number, hence it’s called MDT 8450.  As Aaron detailed in the announcement blog at https://blogs.technet.microsoft.com/msdeployment/2017/12/21/mdt-8450-now-available/, this release is primarily for compatibility with the latest ADK, Windows 10, and ConfigMgr releases, and includes fixes for a variety of bugs (with the full list included).

        Here’s a quick rundown of every change made to the MDT scripts and templates for those that are curious:

      • Templates:
        • SCCM_Client.xml.  Changed the UEFI recovery partition size from 300MB to 499MB to make sure it’s big enough (same as what MDT was already using for Lite Touch).
        • SCCM_Server.xml.  Changed the UEFI recovery partition size from 300MB to 499MB to make sure it’s big enough (same as what MDT was already using for Lite Touch).
      • Scripts:
        • DeployWiz_ProductKeyVista.vbs.  Fixed logic that caused an “invalid deployment type” error.
        • LiteTouch.wsf.  Fixed a variety of logic related to LTI Windows 10 upgrade task sequences (which also fixed an issue related to autologon after a reboot prior to the upgrade).
        • LTIApply.wsf.  Changed the BCDBOOT logic to always run it the same way for UEFI devices, regardless of the OS version, to address some boot-loop issues on bare metal UEFI deployments on some devices.
        • LTICleanup.wsf.  Inconsequential changes (line alignment).
        • ServerManager.xml.  Fixed component names that prevented the installation of Windows Media and IIS Management Console features on Windows Server 2016.
        • SetupComplete.cmd.  Fixed logic related to the changes in LiteTouch.wsf for Windows 10 upgrade task sequences.
        • SetupRollback.cmd.  Ditto.
        • ZTIBde.wsf.  Changed the BitLocker pre-provisioning logic to not try to do anything with the TPM while in Windows PE, to avoid putting the TPM into a reduced functionality state.
        • ZTIGather.wsf.  Added some new chassis types (30, 31, 32 for laptops; 35 and 36 for desktops; 28 for servers).
        • ZTIMoveStateStore.wsf.  Fixed the logic that moved the state store so that it didn’t use a hard-coded StateStore folder location.
        • ZTIOSRole.wsf.  Fixed the logic so that it works for multiple calls to get the source location.
        • ZTIUtility.vbs.  Fixed logic to ignore disabled “Install Operating System” steps (caused problems with some types of task sequences).

        Additionally, all the standalone task sequencer binaries (used to run LTI task sequences) were updated to the latest code from ConfigMgr.

        So, it’s a very minor update overall.  If you have existing task sequences created with MDT 8443, you shouldn’t need to recreate them, although if you have ConfigMgr task sequences you might want to edit the “Format and Partition” UEFI steps to specify 499MB instead of 300MB for the recovery partition size.

        And as always, back up your deployment share before upgrading (especially if you’ve made any script edits), reintegrate your changes if needed, and make sure you update your boot images (including on WDS, USB boot media, boot ISOs, etc.) as mismatched versions will cause all sorts of problems.

        How will Certificate Transparency affect existing Active Directory Certificate Services environments?


        Wes Hammond here from Premier Field Engineering.  It has been a while since I posted anything, but I wanted to step back into the spotlight to talk a little bit about something a few customers have been asking about lately.  How will Certificate Transparency affect their Active Directory Certificate Services environments?  Well, here are your answers…

         

        Before we get started, here is a little bit of information about Certificate Transparency that is relevant to this article.  CT is being applied to certificate authorities that chain to a Public/Commercial Root Authority to detect fraudulent certificates used for HTTPS purposes.  Many public certificate authorities have already been reporting to the CT logging servers for some time now.  How it works is beyond the scope of this document and I would recommend you read the information located at the site linked to at the bottom of this article.

         

        CT in Browsers

        Google is scheduled to enforce CT in Chrome browsers on April 30th 2018 for certificates issued after April 1st 2018.

         

        CT in Private PKI (CA's that DO NOT chain to a public Root)

        I am going to start with the most common scenario.  Most of you have a private PKI within your organization that does not chain up to a public root.  In this scenario, CT will not affect your CA's.  Chrome browser uses Windows native CAPI to determine trusted chains.  Windows can differentiate between commercial/public CA chains and internal/private chains.  Since Windows has this ability, CT will not affect Private/Internal PKI chains.

         

        CT in Certificate Chains that DO chain to public Root

        "IF" your certificate authority chains up to a public root and you issue SSL/TLS/HTTPS certificates, CT may affect your PKI.  How it affects you is beyond the scope of this article, and I would recommend you consult your provider for more information.

         

        Other Certificate Purposes

        As I mentioned earlier, CT is only relevant to certificates used for HTTPS.  All other certificate purposes such as smartcard logon, code signing, document signing, SMIME, and many others are not visible through Chrome browsers and thus are not affected, so rest easy 🙂

         

        For more information on Certificate Transparency see the official site on it here: https://www.certificate-transparency.org/

        If you liked this blog please don't forget to rate it.

        WSUS Catalog import failures


        When you try to import an update from the Windows Update Catalog into a WSUS server running on Windows Server 2016, you may receive the following error message.

        "This update cannot be imported into Windows Server Update Service because it is not compatible with your version of WSUS"

        When you click the "Import Update" link in the WSUS console, an Internet Explorer window opens and redirects you to a URL like the following:
        http://catalog.update.microsoft.com/... &Protocol=1.20
        What you need to do is change the protocol version at the end of that address as follows:
        http://catalog.update.microsoft.com/... &Protocol=1.8

        This is a known issue, and you can use this workaround until the required fix is released.

        What’s new for US partners the week of March 12


        Find resources that help you build and sustain a profitable cloud business, connect with customers and prospects, and differentiate your business. Read previous issues of the newsletter and get real-time updates about partner-related news and information on our US Partner Community Twitter channel.

        Subscribe to receive posts from this blog in your email inbox or as an RSS feed.

        Looking for partner training courses, community calls, and information about technical certifications? Read our MPN 101 blog post that details your resources, and refer to the Hot Sheet training schedule for a six-week outlook that’s updated regularly as we learn about new offerings. To stay in touch with us and connect with other partners and Microsoft sales, marketing, and product experts, join our US Partner Community on Yammer.

        Top stories

        New posts on the US Partner Community blog

        New on demand videos

        MPN news

        Partner webinars available this winter

        Learning news

        Upcoming events

        US Partner Community partner call schedule

        Community calls and a regularly updated, comprehensive schedule of partner training courses are listed on the Hot Sheet

        System Center 1801 Operations Manager – Enhanced log file monitoring for Linux Servers


        System Center Operations Manager 1801 has enhanced log file monitoring capabilities for Linux Servers.

        • Operations Manager now supports Fluentd, an open-source data collector.
        • Customers can also leverage Fluentd capabilities and plugins published by the Fluentd community to get enhanced customizable log file monitoring.
        • The existing OMI based monitoring for currently supported Linux workloads will continue to work as it is today. 

        With this release we have added support for the following log file monitoring capabilities

        • Support for wildcard characters in log file name and path.
        • Support for new match patterns for customizable log search like simple match, exclusive match, correlated match, repeated correlation and exclusive correlation. We have released 6 new filter plugins for customizable log search.
        • Support for generic Fluentd plugins published by the Fluentd community. System Center Operations Manager 1801 includes a converter plugin which converts the Fluentd data from generic plugins to the format specific to SCOM log file monitoring.

        Architecture

        Below are a few architectural changes in the SCOM Management Server and the SCOM Linux agent to support Fluentd.

        The new Linux SCOM agent would include a Fluentd agent (as shown in the above picture (1)).

        Users would define the log file names, match pattern and the event to be generated on pattern match along with the event description in the Fluentd Configuration file.

        On a match of a log record, Fluentd sends the event to the System Center Operations Manager External DataSource service on the SCOM Management Server / Gateway (2). This is a Windows REST-based service which receives the event and writes it to a dedicated custom event log channel, Microsoft.Linux.OMED.EventDataSource (3).

        The user then needs to import a management pack (4) which looks for events in this custom event channel and generates alerts accordingly.

        User Workflow:

        On Linux Server:

        On SCOM Management Server:

        User needs to follow the below steps on the Management Server 

         

        Step 1:

        User would need to import the latest Linux Management pack (shipped with the SCOM 1801 binaries) and install the new SCOM agent on the Linux Servers.

        Users can install the agent either manually or through discovery wizard (recommended). For detailed steps, refer here.

        Step 2:

        Author Fluentd configuration file and place it on the Linux Servers

        Customers need to author a Fluentd configuration file and can use any of the existing enterprise tools like Chef/Puppet to place the configuration file to the Linux server.

        Recommended practice is to copy the configuration into /etc/opt/microsoft/omsagent/scom/conf/omsagent.d directory on all Linux servers and include the configuration file directory as @include directive in the master configuration file /etc/opt/microsoft/omsagent/scom/conf/omsagent.conf
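
        For example, the include line in the master file could look like this (a sketch that assumes the directory layout recommended above):

        # /etc/opt/microsoft/omsagent/scom/conf/omsagent.conf
        @include omsagent.d/*.conf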

        The Fluentd configuration file is where the user should define the input, output and the behavior (match processing) of Fluentd. This is done by defining the following in the configuration file:

        Source directive:

        Fluentd's input sources are defined in the source directive using the desired input plugins. Define the log file names along with the file path in this directive. Wildcard characters are supported in both the file name and the path.

        Filter directive:

        The filter directive forms the chained processing pipeline. Define the match pattern and the events to be generated on a match in this section. The following filter plugins ship with this release:

        • filter_scom_simple_match
        • filter_scom_excl_match
        • filter_scom_cor_match
        • filter_scom_repeated_cor
        • filter_scom_excl_correlation
        • filter_scom_converter

        Match directive:

        Output processing is defined in the match directive. The "out_scom" match plugin, released with this version, sends the events generated by Fluentd to the System Center Operations Manager External DataSource service on the SCOM Management Server/Gateway.

        For more detailed instructions on how to author a Fluentd configuration file, refer here.

        Step 3:

        On SCOM Management server: Import Management pack and enable OMED Service

        On the Management Server, the user needs to do the following:

        1)      Start the OMED service (refer here).

        2)      Import the management pack for log file monitoring.

        Users can import the sample management pack (reference here), save it as an XML file, and import it in the SCOM console. This management pack has a rule that looks for all events from the new data source Microsoft.Linux.OMED.EventDataSource and generates alerts accordingly. The alert severity and priority are set in the management pack. The alert description is obtained from the event description that the user defines in the Fluentd configuration file.

        If users want to generate alerts only for specific events, they can author their own custom management pack using VSAE.

        Example Scenario:

        Suppose the user would like to monitor the following scenarios:

        1)      Apache http server URL monitoring

        Scenario: Monitor a web URL hosted on Apache http server and generate alerts on SCOM Management server if the URL has any issues.

        Log to be monitored: The user monitors the Apache HTTP server access.log for error codes. If the log records any code other than 200 (the success code), an event is sent to the SCOM Management Server.

        2)      Authentication failure

        Scenario: If a user tries to access a server more than 5 times with an incorrect password, an alert is raised on the SCOM server warning that an unauthorized user is trying to intrude.

        Log to be monitored: The user monitors the Linux server auth.log for authentication failure error messages. If the message occurs more than 5 times in 10 seconds, an event is sent to the SCOM Management Server.

        Sample Configuration File:
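        The original post shows the configuration as a screenshot. Purely as an illustrative sketch for scenario 1 (Apache access.log monitoring), a configuration could look like the one below. The tag names, event ID, port, and the parameter names passed to the SCOM plugins (regexp1, event_id1, event_desc1, scom_server) are assumptions for illustration only; check the linked documentation for the exact plugin syntax.

        # Illustrative only - save as /etc/opt/microsoft/omsagent/scom/conf/omsagent.d/apache.conf
        # and make sure omsagent.conf pulls in that directory with an @include directive.

        <source>
          type tail
          path /var/log/apache2/access.log          # wildcards are supported in path and file name
          pos_file /var/opt/microsoft/omsagent/scom/state/apache_access.pos
          tag scom.log.apache
          format none
        </source>

        <filter scom.log.**>
          type filter_scom_simple_match
          # Assumed parameter names: raise event 1001 when a non-200 status code appears in the record.
          regexp1 HTTP/1\.[01]" (?!200)\d{3}
          event_id1 1001
          event_desc1 Apache returned a non-200 response code
        </filter>

        <match scom.log.**>
          type out_scom
          # Assumed parameter names: the OMED service endpoint on the Management Server or Gateway.
          scom_server https://<management-server-fqdn>:8886
        </match>

        The event raised on the Management Server side then carries the description defined for the match, which the management pack uses as the alert description.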

        The OMED service on the SCOM Management Server receives an event, along with the log record context, whenever a log record matches. The user needs to import a management pack on the SCOM server that generates an alert when an event is received from a Linux server.

        Events on the SCOM Management Server:

         Generated Alert on the Management Server:

        The alert context contains the log record, which has more details on the error code received while trying to access the URL.

        Other Sample User Scenarios:

        For more detailed steps look at the online documentation.

        Feedback:

        We’d love to hear your feedback on this new feature. Feel free to send your feedback to scxtech@microsoft.com.

        SQL Data Discovery & Classification in SQL Server Management Studio (SSMS) 17.5


        In the latest version of SQL Server Management Studio (SSMS) 17.5, the new SQL Data Discovery & Classification feature was added with very little fanfare. I urge anyone having to deal with the General Data Protection Regulation (GDPR) or any data classification issues to take a look at it.

        So let's get started (there is a video below):

        1. Download the latest version of SSMS (17.5 or later) from here and install it.
        2. Connect to your instance: SQL Server 2008 and higher; for Azure SQL Database, see Azure SQL Database Data Discovery & Classification.
        3. Right-click your database, select Tasks, and pick Classify Data.
        4. Select all, some, or none of the recommendations (you can also change the Information Type and Sensitivity labels at this time).
        5. Click Save. It's that simple!

         

        You can change the Information Type and Sensitivity labels to values from the drop-down lists. There is only a limited range of options, but there is a plan to allow users to customise the Information Types and Sensitivity Labels, as well as the classification function, in the future.

         

         

        You can also classify data that the Classify Data function has missed.

         

        Now finally, we need a report.

        Just click on the 'View Report' button and we get a view of the classified fields in the database.
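        If you would rather query the stored metadata directly than use the report, a minimal T-SQL sketch along these lines works. It assumes (as appears to be the case in SSMS 17.x) that the classifications are persisted as column-level extended properties named sys_information_type_name and sys_sensitivity_label_name:

        -- List classified columns and their labels (assumes the extended-property names above)
        SELECT
            s.name AS schema_name,
            t.name AS table_name,
            c.name AS column_name,
            MAX(CASE WHEN ep.name = 'sys_information_type_name'  THEN CAST(ep.value AS nvarchar(256)) END) AS information_type,
            MAX(CASE WHEN ep.name = 'sys_sensitivity_label_name' THEN CAST(ep.value AS nvarchar(256)) END) AS sensitivity_label
        FROM sys.extended_properties AS ep
        JOIN sys.columns AS c ON c.object_id = ep.major_id AND c.column_id = ep.minor_id
        JOIN sys.tables  AS t ON t.object_id = c.object_id
        JOIN sys.schemas AS s ON s.schema_id = t.schema_id
        WHERE ep.class = 1
          AND ep.name IN ('sys_information_type_name', 'sys_sensitivity_label_name')
        GROUP BY s.name, t.name, c.name
        ORDER BY s.name, t.name, c.name;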

         

        This looks like a really promising start to some very useful functionality coming in future releases of SSMS and SQL Server.

        Microsoft has more information on GDPR here.

        SPO Tidbit – New features to support SharePoint Framework


        Hello All,

        Wanted to bring to your attention the release of further support for the SharePoint Framework within SPO and O365.

        More support for using the Graph API and third-party APIs in SharePoint via the permission feature, as outlined here.

        You can also read about the new Graph API here.
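        For context, the permission requests live in the solution package manifest and are approved by a tenant administrator in the SharePoint admin center. A minimal sketch of config/package-solution.json (the solution name, id, and scope below are placeholders, not taken from the post) might look like this:

        {
          "solution": {
            "name": "my-webpart-solution",
            "id": "00000000-0000-0000-0000-000000000000",
            "version": "1.0.0.0",
            "webApiPermissionRequests": [
              {
                "resource": "Microsoft Graph",
                "scope": "User.ReadBasic.All"
              }
            ]
          },
          "paths": {
            "zippedPackage": "solution/my-webpart-solution.sppkg"
          }
        }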

        Pax


        Governance in Azure: a service catalog for your central IT


        Does your IT department have a catalog of services that you offer to business units or other teams? Do you manage an application or an environment for them? At the same time, do you want to give them the option of automated deployment without having to ask you? And also make sure that the cost of the infrastructure resources is charged back to them? Use the service catalog in Azure – solutions designed and managed by you that your colleagues can simply find in the portal and deploy.

        Why a service catalog, and how it differs from the Marketplace

        The ideal use case for the service catalog is exactly the situation described in the introduction. IT wants to offer a standardized solution to the rest of the organization in a private way. You can also decide whether the solution acts as a starter template on the consumer's side, where they can freely dig into the created resources, or whether you prefer the variant where they only get read access to the created resources and you look after them, even though they run in the consumer's subscription, to which you may not normally have access.

        The second situation can be the same, except that instead of central IT, the service is offered by your supplier or partner. They create a template of a frequently repeated managed solution for you, and you can deploy and remove it yourself as many times as you like. It is still a private arrangement – the catalog item is visible only to you.

        The same mechanism can also be used in the Marketplace. The service catalog is a private affair, whereas the Marketplace is aimed at the "general public". It is therefore not suited to internal matters, but rather to software vendors who want to offer their software to all Azure customers at a click (certain conditions must be met, and you must be in the Microsoft Developer program).

        What it can contain and how it works

        The resources themselves are handled as an ARM template, so anything that can be defined in a template can be part of this catalog item – infrastructure, platform services, and so on. It can be a single VM, a whole complex VM cluster, or a PaaS infrastructure with a Web App and an Azure SQL DB, for example. This ARM template is what gets created in the consumer's subscription when they order the item.

        The second component is the GUI definition. When you start a deployment from the portal, every solution has a wizard that asks for the important parameters. This GUI is under your control and can ask whatever you want. The collected answers can be passed to the ARM template to parameterize it. In the demo I will show how to make this as simple as possible for the user – so that they are not choosing complicated things they may not understand, but simplified options instead. Most often this is the "size" of the application – Small, Medium, Large. Behind this simple choice you hide the technical details of your recommended sizing, such as VM sizes, disk sizes and types, the Azure SQL DB SKU, and so on. Likewise, you can use conditionals in ARM and offer a simple choice of whether high availability is required or not (and accordingly deploy a single instance or a load-balanced cluster).

        The third component is the permission setup. The first part is the lock setting, i.e. whether the operator should have access to the created resources or not. If you want to treat it as a starter template (and let them tinker with it however they like afterwards), don't set the lock. If it is to be a service managed by you, set the lock and don't even give the user the administrator login for the VM or DB. Related to this is the second part – you can assign permissions (RBAC) on these resources to an account or AAD group that you define. In other words, the central IT team automatically gets permissions in the role you define, so it can work with the resources as needed and look after the environment.

        Let's try it out

        The whole demo is here: https://github.com/tkubica12/azure-managed-app

        First, take a look at the ARM template called mainTemplate.json. It is a simple template that generates an infrastructure with one VM and a public endpoint (incidentally, it returns the resulting URL as an output, which the user then sees in the portal). Pay particular attention to how the simplification of sizing into the Small, Medium, and Large variants is implemented.
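        A common way to implement that kind of mapping is an allowedValues parameter combined with a lookup object in variables. The snippet below is a minimal sketch, not the actual content of mainTemplate.json; the parameter name, VM sizes, and disk SKUs are illustrative:

        {
          "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
          "contentVersion": "1.0.0.0",
          "parameters": {
            "appSize": {
              "type": "string",
              "allowedValues": [ "Small", "Medium", "Large" ],
              "defaultValue": "Small"
            }
          },
          "variables": {
            "sizeMap": {
              "Small":  { "vmSize": "Standard_B2s",    "diskSku": "Standard_LRS" },
              "Medium": { "vmSize": "Standard_D2s_v3", "diskSku": "Premium_LRS" },
              "Large":  { "vmSize": "Standard_D4s_v3", "diskSku": "Premium_LRS" }
            },
            "selectedSize": "[variables('sizeMap')[parameters('appSize')]]"
          },
          "resources": [],
          "outputs": {
            "chosenVmSize": {
              "type": "string",
              "value": "[variables('selectedSize').vmSize]"
            }
          }
        }

        Resources in the template then reference properties such as variables('selectedSize').vmSize, so the person deploying only ever picks Small, Medium, or Large.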

        Next, look at createUiDefinition.json. This is the GUI definition, in which I ask for consent that the solution will be managed by me and then ask for a few parameters, specifically the size of the solution and the domain name.

        We pack both files into a zip and reference that zip when defining this item in the catalog.
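        As a rough sketch of how the definition can then be registered (assuming the AzureRM PowerShell module of that era; the resource group, location, AAD group name, and package URL below are placeholders):

        # Resource group that will hold the service catalog definition (placeholder names)
        New-AzureRmResourceGroup -Name rg-servicecatalog -Location westeurope

        # Principal (e.g. the central IT AAD group) and role that should get access to deployed resources
        $groupId = (Get-AzureRmADGroup -SearchString "Central IT").Id          # assumed group name
        $ownerRoleId = (Get-AzureRmRoleDefinition -Name "Owner").Id

        New-AzureRmManagedApplicationDefinition `
            -Name "my-managed-vm" `
            -ResourceGroupName rg-servicecatalog `
            -Location westeurope `
            -DisplayName "Managed VM (Small/Medium/Large)" `
            -Description "VM solution managed by central IT" `
            -LockLevel ReadOnly `
            -PackageFileUri "https://example.blob.core.windows.net/packages/app.zip" `
            -Authorization "$($groupId):$ownerRoleId"

        The -LockLevel ReadOnly switch is what keeps the consumer's hands off the deployed resources, while the -Authorization pair grants your team the role it needs to manage them.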

        Continue reading

        Azure DDoS Protection service preview


        This post was co-authored by JR Mayberry, Principal PM Manager, and Anupam Vij, Senior Program Manager, Azure Networking.

         

        Distributed denial of service (DDoS) attacks are among the biggest availability and security concerns for customers moving their applications to the cloud. These concerns are justified: according to Nexusguard, the number of recorded DDoS attacks in the first quarter grew by 380% year over year. In October 2016, a number of popular websites were affected by a massive cyberattack made up of multiple denial of service attacks. It is estimated that one third of all Internet downtime incidents are related to DDoS attacks.

        As the types and sophistication of network attacks increase, Azure is committed to providing customers with solutions that continuously protect the security and availability of their applications on Azure. Security and availability in the cloud are a shared responsibility. Azure provides customers with platform-level capabilities and design best practices to adopt and apply in application designs that meet their business objectives.

         

        Today, we are excited to announce the preview of Azure DDoS Protection Standard. The service integrates with virtual networks and provides protection for Azure applications against the impact of DDoS attacks. It enables additional application-specific tuning, alerting, and telemetry beyond the basic DDoS protection that is automatically included in the Azure platform.

        Azure DDoS Protection Service offerings


        Azure DDoS Protection Basic service

        DDoS Protection Basic is integrated into the Azure platform by default, at no additional cost. The scale and capacity of Azure's globally deployed network defends against common network-layer attacks through always-on traffic monitoring and real-time mitigation. No user configuration or application changes are required to enable DDoS Protection Basic.


        Azure DDoS Protection Standard service

        Azure DDoS Protection Standard is a new offering that provides additional DDoS mitigation capabilities and is automatically tuned to protect your specific Azure resources. Protection is simple to enable on any new or existing virtual network and requires no application or resource changes. Standard uses dedicated monitoring and machine learning to configure DDoS protection policies tuned to your virtual network. By analyzing your application's normal traffic patterns, it intelligently detects malicious traffic and mitigates attacks as soon as they are detected. DDoS Protection Standard provides attack telemetry views through Azure Monitor, enabling alerting when your application is under attack. Integrated Layer 7 application protection can be provided by the Application Gateway WAF.


        Azure DDoS Protection Standard service features

        Native platform integration

        Azure DDoS Protection is natively integrated into Azure, and configuration through the Azure portal and PowerShell is included when you enable it on a virtual network (VNet).

        Turnkey protection

        Simplified configuration immediately protects all resources in a virtual network, with no additional application changes required.


         

        Always-on monitoring

        When DDoS Protection is enabled, your application's traffic patterns are continuously monitored for indicators of attack.

        Adaptive tuning

        DDoS Protection understands your resources and resource configuration, and customizes the DDoS protection policy to your virtual network. Over time, machine learning algorithms set and adjust the protection policies as traffic patterns change. Protection policies define protection limits, and mitigation is performed when actual network traffic exceeds the policy thresholds.


        L3 to L7 protection with Application Gateway

        The Azure DDoS Protection service, combined with the Application Gateway web application firewall, provides DDoS protection for common web vulnerabilities and attacks:

        • Request rate limiting
        • HTTP protocol violations
        • HTTP protocol anomalies
        • SQL injection
        • Cross-site scripting


         

        DDoS Protection telemetry, monitoring, and alerting

        Rich telemetry is exposed through Azure Monitor, including detailed metrics during a DDoS attack. Alerts can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with Splunk (via Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis through the Azure Monitor diagnostics interface.

         


        Cost protection

        When the DDoS Protection service reaches general availability, cost protection will provide resource credits for scale-out during a documented attack.

        Azure DDoS Protection Standard service availability

        Azure DDoS Protection is now in preview in select regions in the US, Europe, and Asia. For more information, see DDoS Protection.

        How do I get started?

        DDoS Protection is in preview, and there is no charge for the service during the preview period. Azure customers can sign up for the Azure DDoS Protection service here.

        To learn more about the service, see the Azure DDoS Protection service documentation.

          

        Azure Security Center extends advanced threat protection to hybrid cloud workloads


        Original article by / Principal Program Manager, Azure Cybersecurity

        Azure Security Center helps you protect workloads running in Azure against cyber threats, and it can now also be used to protect workloads running on-premises and in other clouds. Managing security across an increasingly distributed infrastructure is complex and can create gaps that attackers exploit. Security Center reduces this complexity by unifying security management across your entire environment and providing intelligent threat protection powered by analytics and the Microsoft Intelligent Security Graph.

        From streamlined management to new ways to block and detect threats, Security Center continues to innovate to help you address the security challenges you face today. New capabilities announced at Microsoft Ignite include:


        • Enterprise-wide security policies: Using Azure Policy, now in limited preview, Security Center policies can be applied across multiple subscriptions with management groups. This greatly simplifies policy management for customers with Enterprise Agreements and many Azure subscriptions, helping to ensure that security policies are consistently applied to all Azure workloads. Policies can also be applied to workloads running on-premises and in other clouds for simple, centralized management.
        • Adaptive application controls: Security Center adaptive application controls, now in limited preview, help block malware and other unwanted or potentially vulnerable applications by applying whitelisting rules tailored to your specific workloads and powered by machine learning. By analyzing the applications running on your Azure virtual machines (currently Windows only), Security Center can recommend and apply a set of application whitelisting rules customized to a specific VM or group of VMs, improving whitelisting accuracy while reducing management complexity.
        • Advanced threat detection for Windows and Linux: Enhancing its existing threat detection capabilities, Security Center will soon offer detections powered by Windows Defender Advanced Threat Protection (ATP). The advanced post-breach detections built for Windows endpoints will be extended to Windows servers and surfaced in Security Center. The new detections will be included in Security Center Standard and automatically enabled when you onboard your resources; a preview will be available before the end of the year. In addition, Security Center has released a limited preview that uses auditd records, a common auditing framework, to detect malicious behavior on Linux machines.
        • Alert and incident investigation: Security Center now adds a new visual, interactive investigation experience, in preview, that helps you quickly triage alerts, assess the scope of a breach, and determine the root cause. Explore notable links between alerts, machines, and users to determine whether they are connected to an attack campaign. Use predefined or ad hoc queries for deeper examination of security and operational events.


        • Automation and orchestration: Security Center now integrates with Azure Logic Apps to automate and orchestrate security playbooks. Use the Security Center connector to create new Logic Apps workflows and trigger incident response actions from a Security Center alert. Include conditional actions based on alert details to adapt the workflow to the alert type or other factors. Automate common workflows, such as routing alerts to a ticketing system, gathering additional data to aid investigation, and taking remediation actions to address a threat.
        • Security data analytics: Security Center's new integrated search and event monitoring capabilities make it easy to analyze security data from a variety of sources, including data collected by Security Center as well as connected solutions such as network firewalls and Azure Active Directory Identity Protection. Define notable events to track, and create custom alerts for potentially malicious activity using queries you define. A new threat intelligence map provides insight into the geographic origin of attacks, while the identity and access dashboard surfaces sign-in activity data that can reveal potential threats.
        • Expanded security assessments: To help you identify web servers that may be at risk, Security Center now examines the .NET and IIS configuration on Windows VMs and servers to identify vulnerabilities. During the preview, issues will be surfaced specifically as notable events.

        As the threat landscape becomes increasingly challenging, the Azure Security Center team is working hard to give you the solutions you need to keep pace. For more information about these new capabilities, read the documentation or open Security Center to start using them today.

        [Held in March] For MPN partners: information on the monthly webinar [Updated 3/13]


        The Microsoft Partner Network team holds a monthly webinar for partners, aiming to deliver the latest information and important notes about products and programs as quickly as possible. Each webinar runs about one hour, with each topic covered in a self-contained 10 to 15 minute segment.

        The March session will be held on Thursday, March 22, from 13:30 to 14:30.

        Partners interested in attending can register below.

         

        ▼ Register here

         

        Examples of past topics

        • The latest information and important notes about products such as Microsoft Azure, Office 365, Windows 10, and Surface
        • Introductions to initiatives and campaigns such as the workstyle reform movement
        • Information on various events and sponsor programs
        • The latest mstep training information, and more

         

        Examples of information provided so far

        These are examples of topics covered in past monthly webinars. The webinars also cover a variety of other topics, so you can get the latest information about products and programs as soon as possible.

        (Excerpts from the recordings)

         

         

         

         

         

         

         

        2018 Microsoft Global Education Exchange (E2) – Live from Singapore


        Microsoft's annual global Education Exchange provides a platform that brings together educators from around the world to explore industry trends, covering how to support teaching, capacity building, modern pedagogy, safe school environments, and emerging technologies such as predictive analytics to identify students who need help. It is a fantastic event that shows education leaders how to transform today's teaching systems in their schools.

        Follow our journey to this year's Education Exchange: we will be hosting a live broadcast on Facebook across two days, and you can keep an eye on @MicrosoftEDU and #Road2E2 on Twitter and Facebook.

        Facebook live stream link

        Wednesday, March 14

        Opening keynote, 9:00–11:00 AM (Singapore time, same as Taiwan time):

        • Welcome – Chan Lee Mun, former Principal of Nanyang Polytechnic, Singapore
        • "Social-emotional learning and technology" – Molly Zielezinski
        • MakeCode on the micro:bit and mixed reality – Adrian Lim, with teachers and students from Monfort Junior School and a director from Stamford American International School
        • Inclusive learning and "Hacking dyslexia" – Aggeliki Pappa
        • Closing remarks – Anthony Salcito, General Manager of Worldwide Education, Microsoft

        Thursday, March 15

        What's new in education! 8:00 AM, Singapore time

        All the new highlights and first-hand news from the Microsoft global Education Exchange will be shared with the world in this live broadcast. We will also highlight more Microsoft Educator Community resources and the Microsoft Innovative Educator program.

        Closing session, 8:45–10:00 AM, Singapore time

        • Welcome – Eva Psalti
        • "Azure machine learning" – Liam Ellul
        • "Microsoft Educator Community – a free professional development community for teachers" – Sarah Morgan
        • Certification helps students succeed – Heather Daniel
        • "Learning to code with MakeCode" – Douglas Kiang
        • "Hear how Skype in the classroom can change your students' lives" – Emma Nääs
        Microsoft Innovative Educators at the awards ceremony in Toronto, Canada (March 2017).

        Join us and become a Microsoft Innovative Educator

        We invite all educators to join the Microsoft Educator Community, where you will find professional development courses that guide you toward becoming a certified Microsoft Innovative Educator (MIE). After becoming a certified MIE, you can continue onward to become a Microsoft Innovative Educator Expert (MIEE). Nominations open in March.

        Learn more »
