
Tools: Microsoft Teams Awareness Campaign


(This article is a translation of Tools: Microsoft Teams Awareness Campaign, posted on October 27, 2017 on Matt Soseman's "The Productive Cloud" blog. For the latest information, please refer to the original article.)

In a previous post (in English), I introduced useful resources for adopting Microsoft Teams, such as www.successwithteams.com. Building a plan makes the rollout successful and lets word of mouth expand Teams usage. As part of a success plan, you should run an awareness campaign (multiple times, if necessary) before starting the rollout. Its purpose is to let users know that Microsoft Teams is coming to the organization and to provide details and materials that help them prepare. Running campaigns during and after the rollout also helps drive and expand usage.

This post introduces an important tool that powerfully supports such a campaign: the Microsoft Teams Customer Success Kit (in English). The kit includes emails, posters, quick reference cards, templates, and more. Because you can swap in your own logo and customize the content, you do not have to create campaign materials from scratch, which is very efficient. The materials target IT professionals, administrators, trainers, and end users, and the kit can be freely adapted to your organization's needs. To help you understand the tools that support a smooth rollout, here is what is inside.

The kit contains the following:

  • Announcement template
  • Countdown template
  • Flyer template
  • Getting started guide for IT admins
  • Getting started guide for team leaders
  • Poster template
  • Tips and tricks email templates

Let's take a look at each template.

Microsoft Teams countdown template

This template can be used in email (recommended) or as a poster in break rooms and conference rooms. It announces that Microsoft Teams will soon be introduced in the organization and presents the hub for teamwork, meeting capabilities, and an overview of Microsoft Teams. It also includes a link to a two-minute YouTube video that introduces Microsoft Teams in detail (to drive traffic and boost word of mouth, we recommend shortening the link, for example http://short.url/WhatisTeam, before placing it in the document).

To this template you can add shortened links to internal resources and an email alias (or, even better, a Yammer group) for questions and feedback.



Microsoft Teams announcement template

This template can be used in email (recommended) or as a poster in break rooms and conference rooms. It announces that Microsoft Teams will soon be introduced in the organization and clearly presents an overview of Microsoft Teams, end-user benefits, and quick links to resources such as the Office 365 Training Center. To this template you can add shortened links to internal resources and an email alias (or, even better, a Yammer group) for questions and feedback.

Microsoft Teams flyer template

This template can be used as a poster in break rooms and conference rooms or as a table tent in the cafeteria. It is not limited to pre-rollout announcements: you can use it to raise awareness of Microsoft Teams during and after the rollout as well. It neatly presents an overview of Microsoft Teams, end-user benefits, quick links to resources such as the Office 365 Training Center, the Microsoft Tech Community where users can connect with others, internal events, and more. The template is customizable, but it already contains the necessary information, so customization is generally not needed.


Microsoft Teams poster template

This template can be used as a poster in break rooms and conference rooms or as a table tent in the cafeteria. It is not limited to pre-rollout announcements: you can use it to raise awareness during and after the rollout as well. It contains an overview of Microsoft Teams, with links to resources at the bottom of the template.


Microsoft Teams tips and tricks email template 1

This is a template for an email (or other announcement channel) that introduces tips and tricks after the Microsoft Teams rollout. It explains specific features such as channels, conversations, and meetings, with links to resources at the bottom of the template.




Microsoft Teams tips and tricks email template 2

This email template is for use during and after the rollout and introduces various Microsoft Teams features such as file collaboration, creating tabs, GIFs, and the activity log. Tabs are particularly impressive; personally, I think they best express the true value and potential of Teams. This template is also customizable.




Microsoft Teams getting started guide (for IT admins)

Next is an interesting document: an 18-page quick reference that helps IT admins understand Microsoft Teams and manage and deploy it within the organization. Its contents are as follows:

  1. Understand Microsoft Teams
    1. Key benefits
    2. Differentiators
    3. Definitions (organization, teams, channels, and so on)
    4. Team membership
    5. Team roles
  2. Deploy Microsoft Teams
    1. How to enable it
    2. Tenant-level settings
      1. Teams and channels
      2. Calls and meetings
      3. Messaging
      4. Tabs
      5. Bots
      6. Connectors
      7. Client distribution
      8. Licensing
  3. Start a pilot team
    1. Plan
      1. Identify the pilot team
      2. Identify team champions
      3. Set up the pilot for success
    2. Launch
      1. Hold a kickoff meeting with the team champions
      2. Invite the team champions to Microsoft Teams
      3. Help the team champions stage and launch the pilot
    3. Grow
      1. Move work into Microsoft Teams
      2. Integrate Microsoft Teams
      3. Monitor Microsoft Teams
  4. Start the organization-wide rollout
    1. Raise awareness
    2. Deliver training
    3. Enable innovation


Microsoft Teams getting started guide for team leaders

Finally, a special document: a guide packed with information to help you get the most out of Microsoft Teams. It is aimed at team leaders, who play the central role in defining a team's purpose. Its contents are as follows:

  1. Understand Microsoft Teams
    1. Key benefits
    2. Rollout
    3. Structure (organization, teams, channels)
    4. Team membership
    5. Team roles
    6. Team settings
  2. Create a team
    1. Define the team's purpose
    2. Stage the team environment
  3. Invite members
    1. Hold a team meeting
      1. Why Microsoft Teams
      2. Overview and demo
      3. Scope of use
      4. Rules
      5. Conventions
      6. Setup
      7. Follow up with the team
      8. Provide training and help
  4. Encourage members to participate
    1. Move work into Microsoft Teams
    2. Use connectors and tabs
    3. Send feedback and ideas
  5. Set up the IT organization in Microsoft Teams

Note: The information in this article (including attachments and links) is current as of the publication date and is subject to change without notice.

 


Word "Add Text": Build a Table of Contents and Navigate Long Documents in One Go!


When a Word document runs long, does finding a passage make your eyes glaze over? And once the table of contents is built, does every edit to the body mean fixing headings and page numbers one by one, wasting your time? Don't worry: after adding text as headings, you can click a section title in the Navigation Pane and jump straight to that page, with no more dragging the scroll bar while scanning the content. Clear and time-saving! On top of that, the table of contents practically builds itself. Let's take a look!

▲ Click "References" > "Add Text" to add the selected text as a heading

▲ Click "View" and enable the "Navigation Pane" to quickly locate the headings you just added

▲ Click "References" > "Table of Contents"; the table of contents is generated automatically from the headings you set

▲ To update the headings or page numbers in the table of contents, just click "Update Table" at the top of it

 

So after adding text as headings, you get two handy features at once. Give it a try!

[Free Download] Learn the Latest Customer Service Strategies to Grow Your Business (e-Book) [Updated 1/12]


Get ready for the future of customer service.

In an age when prices and products are easy to compare, customer experience is a key driver of brand differentiation. Understanding customer expectations, perceptions, preferences, and trends has never been more important.

According to Microsoft's 2017 Global State of Customer Service report, 96% of customers worldwide say that customer service influences their choice of, and loyalty to, a brand. This 32-page report presents insights drawn from 5,000 survey respondents.

Find out how intelligent customer service can help you maintain a competitive edge and win customers for life. The report provides detailed, practical data on these topics.

 

Download here

 

Exclusive Invitation to the Azure Red Shirt Dev Conference in Germany


Click here to go to the official Azure Red Shirt Dev tour page.

The Azure Red Shirt Dev conference is a first-class Microsoft event created for developers: five hours of live coding with Scott Guthrie, Executive Vice President of Microsoft Cloud and Enterprise, in his red shirt. From the stage, Scott will show you how Azure can help solve your most complex development problems. The event takes place in Munich on January 18, 2018. We have arranged a special bus from Bratislava to Munich so you can attend this developer event, which is free for the very first time! We would like to invite you to this special overnight bus ride.

  • Departure from Bratislava, Trnavské Mýto by the Istropolis cultural center, at 23:50 on January 17.
  • Departure from Prague, Hlavní nádraží (main railway station), at 2:00 a.m. on January 18.

The conference is also available to attendees from Brno; that event takes place in Berlin on January 17, 2018, and a special bus is arranged for you as well.

  • Departure from Brno, Hlavní nádraží, at 23:30 on January 16.

Estimated arrival at the conference:

  • in Munich: January 18 at 7:30 a.m.; the conference starts at 8:45.
  • in Berlin: January 17 at 7:30 a.m.; the conference starts at 8:45.

The same bus will take you back to your point of departure the same day; departure is planned for 15:00. Please always be at the meeting point at least 15 minutes before departure. Seats are limited, so reserve your place on the bus now via the registration below. Please register as soon as possible.

REGISTER NOW

We recommend arranging travel insurance before you depart. If for any reason you cannot attend, please contact Silvia Ragancikova so that we can pass your seat on to someone else.

Free Webinars on Application Development for Office 365


Want to learn everything about developing applications for Office 365? Register for our free online trainings and sharpen your professional skills.

The trainings take place on February 6 and 7. Click a training title to go to its registration page.

February 6, Day One: Development Fundamentals

  1. Getting started with Office 365 application development
  2. Introduction to Office 365 application development: branding your apps in the Office user interface
  3. Building apps with PowerApps and Microsoft Flow

February 7, Day Two: Development Deep Dive

  1. Office 365 application development: ask the expert
  2. Building apps with Microsoft Graph
  3. Building add-ins for SharePoint Online and Office Online

Register for the trainings to learn about application development for Office 365, and share a link to this post with colleagues and friends who might be interested.

The trainings will be conducted in English.

New Learning Opportunities in 2018!


New! Progressive discounts on MCP exams.

Microsoft is introducing a new offer: progressive discounts. The more MCP exams you take within 9 months, the bigger your discount, up to 50%! Here is how it works:

  • Pay the standard price for your first MCP exam.
  • Get a 25% discount on your second exam.
  • Get a 50% discount on your third exam.
  • After the third exam, you get a 50% discount on every subsequent exam taken within 9 months of the first.

Full offer description

New! Save on exam preparation materials.

Learners can save on exam preparation materials, and this offer is already available! Buy 3 or more MeasureUp practice tests on mindhub and get a 30% discount on the MeasureUp practice tests in your order.

The offer is valid from December 4, 2017 through April 30, 2018. Practice tests can be used within 1 year of the purchase date and are available for 30 days after activation. The discount is applied automatically; no discount codes are required. The offer is not valid for exam vouchers or bundled offers.

More about the offer

Get certified. Get a free T-shirt!

Students receive a free T-shirt for passing any exam at an authorized Pearson VUE test center or online.

This is a great opportunity for students to use their Exam Replay and Azure Skills vouchers and to start earning progressive discounts on MCP exams! All claims must be submitted by February 5, 2018. Sign up for your free T-shirt on the offer page. Quantities are limited, and the T-shirt design may change.

The backup could not be started because of an unexpected error in virtual disk service (0x80070057) (0x086C6)


This article describes the symptoms, cause and resolution for an issue that occurs on MARS Agent installations within Azure Virtual Machines.

Symptoms

Scheduled or ad hoc backups on the MARS Agent installed in Azure Virtual Machines fail with the following error:
The backup could not be started because of an unexpected error in the virtual disk service. Restart the virtual disk service and try the backup operation again. If the issue persists, check the system event log for virtual disk service events. (0x086C6)

Here is the screenshot of the error on the MARS Agent job details dialog box:

Here is a screenshot of the error detail blade for the failed backup job on the Azure Portal:

Cause 

The error is caused by transient issues observed in the Windows Server/Client storage layers that prevent the backup metadata VHD from being mounted properly.

While we at Microsoft are actively working to fix this issue, please use the resolution below to mitigate the problem and ensure successful backups.

 

Resolution 

The resolution is to move the MARS Agent scratch (cache) location to a non-OS disk. Your backups, along with their policy and schedule, are retained during and after this move.

Follow the steps below to move the scratch location to a non-OS disk:

  1. Download and install the latest MARS Agent version.
  2. Ensure you have a non-OS disk attached to your Azure VM. This needs to be a different disk from the temp disk.
  3. Stop the backup engine by executing the following command in an elevated command prompt:
PS C:\> net stop obengine
  4. Locate the scratch folder. Typically it is at C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch.
  5. Do not move the files. Instead, copy the scratch-space folder to the non-OS disk with sufficient space.
  6. Update the following registry entries with the path to the new cache folder:

     Registry path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Config
     Registry key: ScratchLocation
     Value: new cache folder location

     Registry path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Config\CloudBackupProvider
     Registry key: ScratchLocation
     Value: new cache folder location

  7. Restart the backup engine by executing the following command in an elevated command prompt:
PS C:\> net start obengine

Once the backup creation is successfully completed in the new cache location, you can remove the original cache folder. 
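For convenience, the stop/copy/update/start sequence can also be scripted. A minimal PowerShell sketch, assuming the new cache location is F:\MARSCache (a placeholder path; run from an elevated prompt):

# Stop the backup engine, copy the scratch folder, point both registry values at it, and restart
net stop obengine
robocopy "C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch" "F:\MARSCache" /E /COPYALL
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows Azure Backup\Config" -Name ScratchLocation -Value "F:\MARSCache"
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows Azure Backup\Config\CloudBackupProvider" -Name ScratchLocation -Value "F:\MARSCache"
net start obengine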

If your Azure issue is not addressed in this article, visit the Azure forums on MSDN and Stack Overflow. You can post your issue in these forums, or post to @AzureSupport on Twitter. You can also submit an Azure support request: on the Azure support page, select Get support.

 

Calendar Week 2/2018 in Review: Ten Interesting Links for IT Experts


Using Azure Automation, OMS and Storage Tables to capture Configuration Data of Azure VMs Part Two


Well, it's been a long time since part one of this blog, and much has changed with OMS Log Analytics. Namely, the query language has transitioned to Kusto, and the webhook format that OMS sends to Azure Automation has changed.

The webhook payload is very different if the Log Analytics workspace has been upgraded to Kusto (or if it is a new workspace). Examples of this payload can be found here. The method to consume this webhook data in PowerShell is below. The meat of it is lines 67-83, but just as important (and similar to the old method) is accepting the webhook payload via a PowerShell parameter.

[sourcecode language='powershell'  padlinenumbers='true' wraplines='false']
<#

     .SYNOPSIS

         vmcreate_v2.0.ps1 is an Azure Automation Powershell Runbook
       .DESCRIPTION

     This script receives webhook data from OMS based on Azure Activity Logs recording a VM create.
     It records the basic CMDB data and writes it to the Automation account output and to an Azure storage table with the write-cmdbdata function.


    .EXAMPLE
          This should be called by OMS based on an activity log search. See blogs.technet.microsoft.com/knightly



    .NOTES
  v 1.2 checks for vnet peering and only writes to the table if the vnet is peered. It also records the source image name.
  v 2.0 is updated for new webhook format of an array of tables containing the vm information
   #>


param (
    [object]$WebhookData
)

$RequestBody = ConvertFrom-JSON -InputObject $WebhookData.RequestBody
$connectionName = "AzureRunAsConnection"
try
{
    # Get the connection "AzureRunAsConnection "
    $servicePrincipalConnection=Get-AutomationConnection -Name $connectionName

    "Logging in to Azure..."
    Add-AzureRmAccount `
        -ServicePrincipal `
        -TenantId $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
}
catch {
    if (!$servicePrincipalConnection)
    {
        $ErrorMessage = "Connection $connectionName not found."
        throw $ErrorMessage
    } else{
        Write-Error -Message $_.Exception
        throw $_.Exception
    }
}

#Get all metadata properties
$AlertRuleName = $RequestBody.AlertRuleName
$AlertThresholdOperator = $RequestBody.AlertThresholdOperator
$AlertThresholdValue = $RequestBody.AlertThresholdValue
$AlertDescription = $RequestBody.Description
$LinktoSearchResults =$RequestBody.LinkToSearchResults
$ResultCount =$RequestBody.ResultCount
$Severity = $RequestBody.Severity
$SearchQuery = $RequestBody.SearchQuery
$WorkspaceID = $RequestBody.WorkspaceId
$SearchWindowStartTime = $RequestBody.SearchIntervalStartTimeUtc
$SearchWindowEndTime = $RequestBody.SearchIntervalEndtimeUtc
$SearchWindowInterval = $RequestBody.SearchIntervalInSeconds

# Get detailed search results
if($RequestBody.SearchResult -ne $null)
{
    $SearchResultRows    = $RequestBody.SearchResult.tables[0].rows
    $SearchResultColumns = $RequestBody.SearchResult.tables[0].columns;

    foreach ($SearchResultRow in $SearchResultRows)
    {
        $Column = 0
        $Record = New-Object -TypeName PSObject

        foreach ($SearchResultColumn in $SearchResultColumns)
        {
            $Name = $SearchResultColumn.name
            $ColumnValue = $SearchResultRow[$Column]
            $Record | Add-Member -MemberType NoteProperty -Name $name -Value $ColumnValue -Force
            $Column++
        }
$resourceID = $record.resourceID
$vmname = $record.resource
$rgname = $record.resourcegroup
$subID = $record.subscriptionID
$caller = $record.caller
write-output "$vmname in resource group $rgname in sub $subID was created"
Select-AzureRmSubscription -SubscriptionId $SubId
$vminfo = Get-AzureRmvm -Name $vmname -ResourceGroupName $Rgname
$vmsize = $vminfo.HardwareProfile.vmsize
$nic = $vminfo.NetworkProfile.NetworkInterfaces
$string = $nic.id.ToString()
$nicname = $string.split("/")[-1]
$ipconfig = Get-AzureRmNetworkInterface -ResourceGroupName $rgname -Name $nicname
$subnet = $ipconfig.ipconfigurations.subnet.id.ToString()
$ipconfig = $ipconfig.IpConfigurations.privateipaddress
$vnet = $subnet.split("/")[-3]
$name = $vminfo.Name
$ostype = $vminfo.StorageProfile.OsDisk.OsType
$location = $vminfo.location

#imageref is null if marketplace image was used
$imageref = $vminfo.StorageProfile.ImageReference.id
            if ($imageref -ne $null)
                {$sourceimg = $imageref.Split("/")[-1]}
                   else {$sourceimg = 'marketplace'}
$subname = (Get-AzureRmSubscription -SubscriptionId $subid).SubscriptionName


#check to see if the VM is on an ER connected VNET by checking its peering
$peer=Get-AzureRmVirtualNetwork -Name $vnet -ResourceGroupName $rgname
$peer= $peer.VirtualNetworkPeerings

if ($peer.count -gt 0) #only write this data to the storage table if the network is peered (only peered vnets can talk to ER vnets)
  {




    #writing output into the automation account for debugging
    write-output "$vmsize $Ipconfig $location $name $ostype $caller $timestamp $subname, $Vnet, $LID"

    #once VM information is collected, it can be written into a storage table
    Select-AzureRmSubscription -SubscriptionName 'Sub1' #this should be the subscription that owns the storage account, not where the VM is deployed
    $resourceGroup = "RGNAME" #resource group that contains the storage table
    $storageAccount = "cmdbtable" #storage account that contains the table
    $tableName = "CMData"
    $saContext = (Get-AzureRmStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccount).Context
    $table = Get-AzureStorageTable -Name $tableName -Context $saContext


    #search the storage table to see if the VM already exists
    [string]$filter1 = [Microsoft.WindowsAzure.Storage.Table.TableQuery]::GenerateFilterCondition("ResourceID", [Microsoft.WindowsAzure.Storage.Table.QueryComparisons]::Equal, "$resourceID")
    $new = Get-AzureStorageTableRowByCustomFilter -table $table -customFilter $filter1
    if ($new -eq $null) {
        $partitionKey = "VMcreates"
        Add-StorageTableRow -table $table -partitionKey $partitionKey -rowKey ([guid]::NewGuid().tostring()) -property @{"SourceIMG" = "$sourceIMG"; "SubscriptionID" = "$subid"; "ResourceGroup" = "$rgname"; "ResourceID" = "$resourceID"; "computerName" = "$vmname"; "ostype" = "$ostype"; "CreatorID" = "$caller"; "PrivateIP" = "$IPconfig"; "Vnet"="$Vnet";"Location" = "$Location"}

    }
    else {
        $partitionKey = "VMUpdates"
        Add-StorageTableRow -table $table -partitionKey $partitionKey -rowKey ([guid]::NewGuid().tostring()) -property @{"SourceIMG" = "$sourceIMG"; "SubscriptionID" = "$subid"; "ResourceGroup" = "$rgname"; "ResourceID" = "$resourceID"; "computerName" = "$vmname"; "ostype" = "$ostype"; "CreatorID" = "$caller"; "PrivateIP" = "$IPconfig";"Vnet"="$Vnet"; "Location" = "$Location"}

    }

}}  }



[/sourcecode]

STOP error 0x9e may occur when an NFS server in a WSFC environment fails over while 40 or more clients hold unreleased locks


Hello, this is Windows Platform Support.

This article describes a phenomenon in which a STOP error 0x9e occurs when an NFS server in a WSFC environment fails over under specific conditions.

[Symptom]
In a Windows Server Failover Clustering (WSFC) environment on Windows (Storage) Server 2012, 2012 R2, or 2016, if 40 or more NFS clients hold locks against the NFS server and lose connectivity without properly releasing those locks, and the NFS server is then failed over twice in succession, a STOP error 0x9e (USER_MODE_HEALTH_MONITOR) may occur.

 

[Cause]
This issue occurs in the following scenario:

1. An NFS client accesses the NFS server on cluster node A and locks a file.
It then disconnects from the network without releasing the lock. Assume there are 40 such clients.
2. The NFS server fails over from node A to node B.
3. After the NFS server starts on node B, it attempts to contact each lock-holding NFS client on UDP port 111.
4. The NFS server is then failed over from node B to yet another node.

While step 3 waits for responses from the clients that never released their locks, the processing in step 4 that tries to take the NFS server offline must wait for step 3 to complete.
If this wait takes 20 minutes or more, the cluster health monitoring timeout (20 minutes) triggers a STOP error 0x9e (USER_MODE_HEALTH_MONITOR).
The server waits up to 30 seconds per client with unreleased locks, so when roughly 40 clients are unreachable from the NFS server, the processing takes more than 20 minutes.

 

[Workaround]
The issue can be avoided by forcibly releasing the locks held on the NFS server before performing the failover.

- Command to list currently open locks:

Get-NfsClientLock

- Command to release all locks:

Revoke-NfsClientLock -Path *

- Impact:

When this workaround is applied, the locks are forcibly released on the NFS server side, while the clients still believe they hold them.
In this state, if another client requests a lock on the same file, that client can acquire it.
If the client that originally held the lock later accesses the file, errors may occur depending on the client application.
For clients that are already offline, forcibly releasing the locks has no impact.
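For a planned failover, the two commands can be combined into a short pre-failover script. A minimal PowerShell sketch; the cluster group name "NfsServerGroup" is a placeholder, and Move-ClusterGroup comes from the FailoverClusters module:

# Release any remaining NFS client locks, then move the NFS server group
if (Get-NfsClientLock) {
    Revoke-NfsClientLock -Path *
}
Move-ClusterGroup -Name "NfsServerGroup"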

 

[Status]
We are currently investigating this issue.

The information in this article (including attachments and links) is current as of the publication date and is subject to change without notice.

Deployment of Windows 10 Updates using System Center Configuration Manager Current Branch


A question I regularly get asked is how to manage Windows 10 updates via System Center Configuration Manager. In this blog post I will explain the different options as well as their basic configuration. I assume you are familiar with the ConfigMgr update deployment functionality. Before explaining how to manage Windows 10 updates with ConfigMgr, we need to distinguish between the different update types. With the introduction of Windows 10, updates fall into two types:

  • Quality updates: monthly quality rollups with improvements to existing Windows 10 functionality, including security updates.
  • Feature updates: twice-yearly releases of Windows 10 with new functionality and improvements.

More information about Windows as a service and the difference between the separate updates can be found here.

Prerequisites:

Before we can deploy these updates with ConfigMgr, the right catalog needs to be selected, and before selecting the catalog the prerequisites need to be in place. For ConfigMgr, the prerequisite is that WSUS is installed and working correctly. In addition, before syncing and deploying feature updates on Windows Server 2012 and 2012 R2, at minimum the July monthly rollup (or a later one; quality rollups are superseding) needs to be installed. These rollups provide the capabilities of the earlier updates KB3095113 and KB3159706. After installing the quality rollup, the 'wsusutil.exe postinstall /servicing' command needs to be run to enable ESD decryption. Please note that when running Windows Server 2016, these updates are not needed to synchronize the upgrade classification catalog.
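The servicing command is run once on the WSUS server after the rollup is installed. A minimal sketch from an elevated PowerShell prompt, assuming the default WSUS installation path:

& "C:\Program Files\Update Services\Tools\wsusutil.exe" postinstall /servicing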

Initial Configuration

After installing the prerequisites, we can select the right catalog from ConfigMgr. In the ConfigMgr console, go to: Administration -> Sites -> select the site server -> Configure Site Components -> Software Update Point Component Properties -> Classifications tab. Here we select:

  • Updates: To sync the quality updates;
  • Upgrades: To sync the catalog of feature updates.


After selecting the classification, we need to select the products:


Please note: on a new installation, a first sync needs to complete before Windows 10 products are visible in the product list. A synchronization can be initiated via: ConfigMgr console -> Software Library -> Software Updates -> right-click -> Synchronize Software Updates. Synchronization can be monitored by reviewing wsyncmgr.log.
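If you prefer to watch the sync from PowerShell rather than a log viewer, you can tail the log. A minimal sketch, assuming the default ConfigMgr installation path on the site server:

Get-Content "C:\Program Files\Microsoft Configuration Manager\Logs\wsyncmgr.log" -Tail 20 -Wait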


After the initial synchronization is finished, we can select the products; in our case, this should be Windows 10. To select the products, go to the Software Update Point Component Properties -> Products tab. Here we can select Windows 10 or narrow the selection down to individual versions. In my case I select Windows 10 as a whole. After the selection is made and the synchronization has completed, the updates should be visible in the console: ConfigMgr console -> Software Library -> All Software Updates:

Deployment of Quality Updates

Quality updates can be deployed with ConfigMgr in the traditional way, by using automatic deployment rules (ADRs) or manual deployments. From ConfigMgr 1706 onwards there is an additional capability: deploying Windows Update for Business policies. By using these policies we can configure Windows Update for Business and its deferral settings. These settings can also be configured via Group Policy and MDM. But please note that the behavior of Windows Update for Business is different:

  • Clients will download updates from Windows Update for Business (online);
  • SCCM is not able to report on compliance, because clients do not report their compliance state back;
  • By configuring these Windows Update for Business settings, we configure deferral settings for quality updates as well as feature updates.

More information about this behavior can be found here; more information about the more advanced options can be found here.

Deployment of Feature Updates

The feature updates of Windows 10 can be deployed in two different ways, by using:

  • Windows 10 servicing functionality
  • An upgrade task sequence

A question I regularly receive is which solution to use. Both are valid solutions, but servicing does have some considerations. Currently, a servicing plan cannot handle language packs, compatibility pre-assessment, or the addition of extra drivers. Long story short: an upgrade task sequence gives you more flexibility, because you can add manual steps and customize the upgrade process.

Windows 10 servicing:

Windows 10 servicing can be configured via the Servicing section in the Software Library. Here we can create different servicing plans for the different deployment rings you want to introduce in your environment. We can filter on languages, limit the number of servicing updates to download, and configure the deferral settings for the selected Semi-Annual Channel (Targeted) or Semi-Annual Channel. You are basically configuring an automatic deployment rule. Based on the deferral configuration and the collections selected, the servicing plan is created and can run on a schedule automatically.

Upgrade task sequence

The upgrade task sequence is a separate task sequence option which can be created from Software Library -> Operating Systems -> Task Sequences. Before creating this task sequence, we need to add the operating system upgrade package to the software library. A normal task sequence uses a .wim file; in this scenario we instead need the media of the Windows 10 release you want to upgrade to, in my example Windows 10 1709. During the upgrade task sequence, Windows 10 setup is launched with the appropriate command-line options. The power of this way of upgrading Windows 10 to a newer release is the flexibility and the ability to customize the upgrade. To add the operating system upgrade package, go to: Software Library -> Operating Systems -> Operating System Upgrade Packages and click Add Operating System Upgrade Package.



Browse to the Windows 10 media content and add it to ConfigMgr. When the operating system upgrade package is added, we can create an upgrade task sequence: go to Software Library -> Task Sequences -> Create Task Sequence. In the Create Task Sequence Wizard, select "Upgrade an operating system from an upgrade package"; during the wizard we can select the operating system upgrade package and add updates or applications as needed. Eventually we end up with a task sequence with three steps, to which we can add additional customization when needed.



That wraps up this blog post. I hope it is helpful; please leave questions or comments below.

Support Tip: Cross-Posting “Additional guidance to mitigate speculative execution side-channel vulnerabilities”


Sharing these posts (cross-posting) with the Intune community, as they are informative links. The original posts below are updated as needed, so check back on each post or sign up for RSS feeds, as relevant.

Deep Dive: VDI with Citrix Cloud on Microsoft Azure


Jeff Mitchell, Cloud Solution Architect, One Commercial Partner

Citrix builds on their long-standing partnership with Microsoft to offer multiple Virtual Desktop Infrastructure (VDI) deployment options for Citrix solutions on Microsoft Azure—including virtual apps, desktops, data and networking. Microsoft Cloud Solution Architect Jeff Mitchell shares how you can help customers provision and deliver workloads on the Microsoft Azure cloud platform, reducing overall IT costs and increasing efficiency.

Join the Applications and Infrastructure Community call on Friday, January 19th for an overview of Citrix Cloud on Azure.

Citrix Cloud

Citrix Cloud was introduced in August of 2015 amid much fanfare, as the titan of industry reassured us of their focus on delivering value to customers by decreasing management tasks and update cycles involved with the control layer of a Citrix deployment. Often, we speak about cloud being a model that delivers resiliency, scalability, and self-service, allowing organizations to deliver technology as a service to drive business value. Citrix Cloud is no different. By delivering Citrix Workspaces in the cloud model, IT organizations can deliver Citrix Workspaces to end users with the benefits that cloud offers.

Virtual applications and desktops are delivered using the Citrix Cloud XenApp and XenDesktop Service. Look under the hood of this service and you will see it is the same control layer you love with Studio, Director, StoreFront, and Delivery Controllers. In addition, SQL Server management and license management are built into the service and no longer require granular operations. So, you have a control plane that is maintained by Citrix, giving you the ability to centrally manage a user’s app and desktop resources running securely in your Azure subscription!

Citrix and Azure Design

Azure is a resource location on the Citrix Cloud Platform—a place to deploy XenApp and XenDesktop. We know that to have a successful Azure cloud adoption you need to consider five primary areas: Operations, Identity, Governance, Security, and Connectivity.

With Citrix, the same principles apply. Typical resource requirements in an Azure resource location will include Active Directory domains, NetScaler, Virtual Desktop Agents, and the Citrix Cloud Connector. Azure is fantastic as a resource location due to factors like proximity to end users, with 36 Azure regions around the world, as well as scale and security requirements, which are well served on Azure thanks to deep integration with Azure Active Directory and Office 365.

The two most common scenarios I've seen are based on hybrid cloud models. The first is adding Azure as a resource location to an existing on-premises deployment: the customer has a current on-premises control layer and would like to add Azure as a resource location for new deployments or migration. The second is a greenfield deployment, with Citrix Cloud delivering resource locations in Azure or multi-cloud. Customers in this scenario are often adopting Citrix for the first time or looking for a multitenancy solution.

Deploying Citrix to Azure

I will cover a high-level process of getting started with Citrix on Azure. If I get some comments below or feedback on social posts, we can dive deeper.

First, set up your Citrix Cloud account, then add a resource location on the left-hand side of the screen. This will prompt you to download the Cloud Connector, which is used for securely managing resources in the resource location. Cloud Connectors typically do not have to be large VMs, depending on the deployment size, but it is recommended to host two or more in the resource location for high availability (for details, see this Scalability White Paper).

It is recommended at this point to think about your Azure subscription as a resource location that needs the resource requirements discussed above. With the Azure subscription resource requirements out of the way, you can now finish registering the resource location in the Citrix Cloud portal. Confirm through Citrix Studio that the Cloud Connector is listed under Zones. This ensures it is online and reporting in, and prevents deployment failures. Now right-click the Azure Zone and create a new host connection. During this process a service principal is provisioned for you. If you run into issues, you can configure this manually.

With the steps above complete, it's time to set up a VDA. Create a VM with a Desktop OS or Server OS, depending on the catalog you want in Citrix Cloud. Install the Citrix VDA software on the VM and shut the VM down, making sure it is in a Stopped (deallocated) state. You can now run the Machine Catalog Setup.
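The deallocated state matters because a VM that is only shut down from within the guest still holds its compute allocation. A minimal AzureRM PowerShell sketch to enforce and verify this; the resource group and VM names are placeholders:

# Deallocate the VDA VM and confirm its power state before running Machine Catalog Setup
Stop-AzureRmVM -ResourceGroupName "rg-citrix" -Name "vda01" -Force
(Get-AzureRmVM -ResourceGroupName "rg-citrix" -Name "vda01" -Status).Statuses |
    Where-Object { $_.Code -like "PowerState/*" }   # expect PowerState/deallocated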

Considerations and Tools

I normally start with a D2v2 VDA setup; you can right-size from there through testing. Based on Citrix testing, D2v2 is the optimal VM instance in terms of $/user/hour (assuming full utilization of the VDA):

  • Task workers: 19 users per VM ($0.015/user/hr)
  • Knowledge workers: 15 users per VM ($0.019/user/hr)

The XenApp/XenDesktop cost calculator for Azure, located here, will help you scope the appropriate VM size based on defining typical Citrix worker roles and bandwidth considerations.

Citrix is all in on Citrix Cloud. At Citrix Summit 2018, Craig Stilwell, VP of Worldwide Partner Sales at Citrix, announced refreshed and simplified large Cloud incentives available from Citrix.

Community call

If you'd like to hear more on this topic, register for the Applications and Infrastructure Community call on Friday, January 19th. The call will provide insight into what is most important in the Microsoft partner ecosystem. We'll have a conversational dialogue between two technology professionals that is designed to appeal to technical, sales, and business professionals. One presenter will discuss Citrix on Azure from an infrastructure perspective, the other from an application development perspective.

For those of you looking for an additional deep dive, join me and Citrix Enterprise Architect Kevin Nardone on Thursday, February 1st to hear the top lessons learned when deploying Citrix on Azure. Webinar registration will open next Friday; however, you can visit bit.ly/CitrixTIPs now to subscribe to the series and receive a notification once it goes live. At the conclusion of the event, Kevin and I will open the line for Q&A to answer your cloud questions, live!

Register for the Citrix TIPs webinar and Q&A

Applications and Infrastructure Technical Community

Azure Batch for the IT Pro – Part 1


I spent some time working with Azure Batch for a customer, and what struck me was that it is not so easy for an IT Pro to create a meaningful test setup. The stumbling point is that you need an application that does meaningful work.

So what is Azure Batch? It is the PaaS version of High Performance Computing (HPC). Azure gives you the infrastructure, you give it the application and define the tasks and jobs telling Azure what to do. Azure Batch is suited to intrinsically parallel workloads, such as scientific model calculations, video rendering, data reduction, etc.

My goal here is to set up an example that means something to the IT Pro who, like me, is familiar with Powershell but does not write .NET code in Visual Studio every day. The project is hosted on my Azure-Batch repo on Github.

Overview Azure Batch walkthrough based on a Powershell application

This Azure Batch walkthrough creates a Batch account, storage account, package, a pool of VMs, and executes a job with multiple tasks to generate decimal representations of Mersenne prime numbers. If you just want to do the walkthrough, do the following steps; if you also want the why and how, read the background information as well.

  1. Make sure you have access to an Azure subscription. You will need to type credentials of at least Contributor level.
  2. Install the latest Azure Powershell modules.
  3. Download the two toplevel scripts Create-BatchAccountMersenne.ps1 and Create-BatchJobsAndTasks.ps1.
  4. Run Create-BatchAccountMersenne.ps1.
  5. Open the Azure Portal, locate the (default) resource group named "rg-batch-walkthrough", and inspect it a bit.
  6. Run Create-BatchJobsAndTasks.ps1. This might take a while to complete.
  7. Inspect the jobs, tasks, and task output.
  8. Locate the Storage Account, open its File service, and find the share called (by default) "mersenneshare"; you should have the Mersenne primes right there.

This ends the walkthrough steps. Next up is a discussion of the Powershell code needed to create the Azure Batch infrastructure.

Creating the Batch Account using Powershell

The full script to create the Azure Batch walkthrough account is Create-BatchAccountMersenne.ps1. The code below is straight from the script, stripped of comments and extras. You can either execute the script itself after downloading, or run the snippets below one by one.

First, log on and select the appropriate subscription.

Add-AzureRmAccount
# select a subscription using select-azurermsubscription, if needed.

We need some definitions to lock down the configuration. They mostly speak for themselves. Important to note is that the name Mersenne is used everywhere and should not be changed without thorough investigation. The variable $PackageURL is a URL to download the actual (zip) package containing the code to be run; in this case, the application code is a Powershell script. Feel free to host the ZIP file wherever you like. The hashtable $poolWindowsVersion maps the human-readable Windows version to the OS family value used for the pool VMs.

$ResourceGroupName = "rg-batch-walkthrough"
$Region = "Central US"
$BatchAccountNamePrefix = "walkthrough"
$WindowsVersion = "2016"

$Applicationname = "Mersenne"
$PoolName = "Pool1"
$ShareName ="mersenneshare"
$Nodecount = 2
$PackageURL = "https://github.com/wkasdorp/Azure-Batch/raw/master/ZIP/MersenneV1.zip"
$poolWindowsVersion = @{
    "2012"     = 3
    "2012R2"   = 4
    "2016"     = 5
}

The next bit is interesting: we need some worldwide-unique names, preferably without having the user specify them or trying variations until we find a free one. For this, we use a function that takes the ID of a resource group and mangles it into a semi-random string. Take a look at an earlier post that explains this function in detail.

function Get-LowerCaseUniqueID ([string]$id, $length=8)
{
    $hashArray = (New-Object System.Security.Cryptography.SHA512Managed).ComputeHash($id.ToCharArray())
    -join ($hashArray[1..$length] | ForEach-Object { [char]($_ % 26 + [byte][char]'a') })
}
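An illustrative call (the subscription GUID is a placeholder, and the output shown is just an example; the actual value depends on the ID):

Get-LowerCaseUniqueID -id "/subscriptions/<guid>/resourceGroups/rg-batch-walkthrough"
# returns an 8-character lowercase string such as "pkvydrcf"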

The business part starts here. First, we need a Resource Group. Because I anticipate that this part of the code may need to be re-run a couple of times, I made it restartable (meaning: do the smart thing if the Resource Group already exists):

$ResourceGroup = $null
$ResourceGroup = Get-AzureRmResourceGroup -Name $ResourceGroupName -ErrorAction SilentlyContinue
if ($ResourceGroup -eq $null)
{
    $ResourceGroup = New-AzureRmResourceGroup -Name $ResourceGroupName -Location $Region -ErrorAction Stop
}

An Azure Batch account needs a Storage Account to store packages and data, so we first create the Storage Account and then provision the Batch Account with a reference to it. We also create an SMB share in the Storage Account, because an SMB share is easy to write to from Powershell running on the pool VMs. This part is also restartable, which is why we end by explicitly retrieving the Azure Batch context. Note the use of the function Get-LowerCaseUniqueID to determine the names of the Batch and Storage accounts.

$BatchAccountName = $BatchAccountNamePrefix + (Get-LowerCaseUniqueID -id $ResourceGroup.ResourceId)
$StorageAccountName = "sa$($BatchAccountName)"
$BatchAccount = $null
$BatchAccount = Get-AzureRmBatchAccount -AccountName $BatchAccountName -ResourceGroupName $ResourceGroupName -ErrorAction SilentlyContinue
if ($BatchAccount -eq $null)
{
    $StorageAccount = New-AzureRmStorageAccount -ResourceGroupName $ResourceGroupName -Name $StorageAccountName -SkuName Standard_LRS -Location $Region -Kind Storage
    $Share = New-AzureStorageShare -Context $StorageAccount.Context -Name $ShareName
    $BatchAccount = New-AzureRmBatchAccount -AccountName $BatchAccountName -Location $Region -ResourceGroupName $ResourceGroupName -AutoStorageAccountId $StorageAccount.Id
}
$BatchContext = Get-AzureRmBatchAccountKeys -AccountName $BatchAccountName -ResourceGroupName $ResourceGroupName

Next up is to create an application generating some data. As mentioned, the application is pre-packaged as a ZIP file containing two Powershell scripts. We download this into a temporary file, and then generate a new Azure Batch application definition in the existing account. To make life easier for application management, we explicitly define a default version ("1.0").

$tempfile = [System.IO.Path]::GetTempFileName() | Rename-Item -NewName { $_ -replace 'tmp$', 'zip' } -PassThru
Invoke-WebRequest -Uri $PackageURL -OutFile $tempfile
New-AzureRmBatchApplication -AccountName $BatchAccountName -ResourceGroupName $ResourceGroupName -ApplicationId $applicationname
New-AzureRmBatchApplicationPackage -AccountName $BatchAccountName -ResourceGroupName $ResourceGroupName -ApplicationId $applicationname `
    -ApplicationVersion "1.0" -Format zip -FilePath $tempfile
Set-AzureRmBatchApplication -AccountName $BatchAccountName -ResourceGroupName $ResourceGroupName -ApplicationId $applicationname -DefaultVersion "1.0"

Finally, we create a pool of VMs that will be used to run the package. I made a number of design choices here.

  • I used a cloud service (PaaS) VM. This is faster to deploy, but more limited in functionality. For instance, we are limited to Windows although Azure Batch supports Linux as well.
  • The pool has dedicated VMs which run as long as the pool exists. Again, this is faster for testing, but also more expensive. The alternative is to use low-priority nodes that get provisioned when needed. Also, it is possible to increase or decrease the number of nodes in the pool.
$appPackageReference = New-Object Microsoft.Azure.Commands.Batch.Models.PSApplicationPackageReference
$appPackageReference.ApplicationId = $applicationname
$appPackageReference.Version = "1.0"
$PoolConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSCloudServiceConfiguration" -ArgumentList @($poolWindowsVersion[$WindowsVersion],"*")
New-AzureBatchPool -Id $PoolName -VirtualMachineSize "Small" -CloudServiceConfiguration $PoolConfig `
    -BatchContext $BatchContext -ApplicationPackageReferences $appPackageReference -TargetDedicatedComputeNodes $Nodecount

After executing this code, the Azure fabric provisions two VMs in the pool. This should take about 10-20 minutes. After the nodes have fully initialized, the Azure Portal should look something like the following screenshot. There are zero task states, obviously, and two running nodes.

Azure Batch Account in the Portal
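If you would rather wait from PowerShell than watch the portal, you can poll the pool. A minimal sketch; the property name CurrentDedicatedComputeNodes is an assumption matching AzureRM Batch module versions that support -TargetDedicatedComputeNodes:

# Poll every 30 seconds until all dedicated nodes have been allocated
do {
    Start-Sleep -Seconds 30
    $pool = Get-AzureBatchPool -Id $PoolName -BatchContext $BatchContext
    "Dedicated nodes allocated: $($pool.CurrentDedicatedComputeNodes) of $Nodecount"
} until ($pool.CurrentDedicatedComputeNodes -ge $Nodecount)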

At this point, we are ready to run the application package, Mersenne. Doing this is slightly involved because we need to create a Batch Job, with a task for each Mersenne number to be calculated. The command line used to start the Powershell script in the VMs needs to be constructed as well. This is the subject of the next post in this short series.

Application code: decimal version of Mersenne Prime Numbers

This section really is nothing more than background information on the sample application. All you need to know is that it generates prime numbers.

The sample application for this walkthrough is a very simple but CPU-intensive calculation. Throughout known history, people have searched for ever larger prime numbers: numbers that have no divisors except 1 and themselves. You know: 2, 3, 5, 7, 11, 13, 17, ... etc. There is an infinity of them. Currently, the largest known prime numbers are Mersenne numbers. I explained this a little bit in the readme.md on Github. The very short summary: a Mersenne number is a power of two, minus one, where the exponent is itself a prime number. Examples: 2^7-1 = 127, 2^31-1 = 2147483647, and the currently (January 3, 2018) largest one: 2^77232917-1 = <huge number>.

The Mersenne application packed in the ZIP file MersenneV1.zip simply calculates (some of) these numbers. The following is slightly simplified from the version in the package. You can paste this into a Powershell ISE window, and it will just work:

# left out argument handling, using Read-Host instead.
$MersenneExponents = @(
    2, 3, 5, 7, 13,
    17, 19, 31, 61, 89,
    107, 127, 521, 607, 1279,
    2203, 2281, 3217, 4253, 4423,
    9689, 9941, 11213, 19937, 21701,
    23209, 44497, 86243, 110503, 132049,
    216091, 756839, 859433, 1257787, 1398269,
    2976221, 3021377, 6972593, 13466917, 20996011,
    24036583, 25964951, 30402457, 32582657, 37156667,
    42643801, 43112609, 57885161, 74207281, 77232917
)

function PrintMersenneDecimal ([int] $n, $width = 80)
{
    $prime = [numerics.biginteger]::pow(2,$n)-1
    $s = $prime.ToString()
    "Mersenne prime 2^$n-1 has $($s.length) digits."
    for ($n=0; $n -lt $s.length; $n += $width)
    {
       $s.Substring($n, [math]::min($width, $s.Length - $n))
    }
}

$index = $(Read-Host -Prompt "Which Mersenne number to calculate? (0-$($MersenneExponents.count-1))")
PrintMersenneDecimal -n $MersenneExponents[$index]

Simple enough, I guess. The part doing the actual work is a oneliner based on a .NET library: [numerics.biginteger]::pow(2,$n)-1.

Let's move on to the final part, a glue script called generate_decimal_mersenne_and_upload.ps1. Its purpose is to serve as an interface between the Azure Batch infrastructure and the code doing the actual work. It reads the arguments passed to the Azure Batch tasks, and takes care of writing the resulting data back to the SMB share defined on the Storage Account. Simplified version:

[CmdletBinding()]
Param
(
    [int] $index,
    [string] $uncpath,
    [string] $account,
    [string] $SaKey
)

$batchshared = $env:AZ_BATCH_NODE_SHARED_DIR
$batchwd = $env:AZ_BATCH_TASK_WORKING_DIR
$outfile =  (Join-Path $batchwd "Mersenne-$($index).txt")

$generateMersenne = "$env:AZ_BATCH_APP_PACKAGE_MERSENNE\calculate_print_mersenne_primes.ps1"

&$generateMersenne -index $index > $outfile

New-SmbMapping -LocalPath z: -RemotePath $uncpath -UserName $account -Password $SaKey -Persistent $false
Copy-Item $outfile z:

The main points to note are the mandatory use of environment variables for file and directory paths, the fact that some variable names are based on the actual package name (Mersenne), and the literal reference to the script calculating the Mersenne primes: calculate_print_mersenne_primes.ps1.

Next step: submit a job to the Azure Batch account.

In this post I have shown you step-by-step how to create an Azure Batch account using Powershell. The next step is to create a batch job to run the Mersenne test application on the pool nodes. For this, continue to part 2:

Azure Batch for the IT Pro – Part 2


This is the second and final part of a blog series with a walkthrough for Azure Batch. The first part is here:

In the first part I showed you how to create an Azure Batch Account, the corresponding Storage Account, a test application based on Powershell, and a pool of VMs to run the application. In this second part we will actually do the work of calculating Mersenne Prime Numbers.

At this point, you should have a working Azure Batch account with a pool of VMs and a provisioned application. The first step is to retrieve the configuration.

Retrieve the Azure Batch configuration

Note that the code shown in this post is working, but was taken from Create-BatchJobsAndTasks.ps1 and simplified: no logging or error checking. Refer to the Azure-Batch GitHub project for the full scripts.

The code below is straight from the script, stripped of comments and extras. You can either execute Create-BatchJobsAndTasks.ps1 after downloading, or run the snippets below one by one.

First, the configuration parameters. These should match the ones from the creation script discussed in the first post, Create-BatchAccountMersenne.ps1. The indexes here refer to Mersenne numbers; 20 is not high. Anything larger than 30 will take a measurable amount of time, and indexes above 40 will take hours or days (and generate numbers with millions of digits).

$firstindex = 1
$lastindex = 20 # current maximum index.
$jobnamePrefix = "mersenne-job"
$tasknameprefix = "task"

$Applicationname = "Mersenne"
$ResourceGroupName = "rg-batch-walkthrough"
$BatchAccountNamePrefix = "walkthrough"
$ShareName ="mersenneshare"

The following helper function is needed to reconstruct the actual names of the Azure Batch and Storage account, as explained in the first post.

function Get-LowerCaseUniqueID ([string]$id, $length=8)
{
    $hashArray = (New-Object System.Security.Cryptography.SHA512Managed).ComputeHash($id.ToCharArray())
    -join ($hashArray[1..$length] | ForEach-Object { [char]($_ % 26 + [byte][char]'a') })
}

Read the Resource Group, reconstruct the Azure Batch account name from the Resource Group ID, and read the access keys of the Batch Account.

$ResourceGroup = Get-AzureRmResourceGroup -Name $ResourceGroupName -ErrorAction stop
$BatchAccountName = $BatchAccountNamePrefix + (Get-LowerCaseUniqueID -id $ResourceGroup.ResourceId)
$batchaccount = Get-AzureRmBatchAccount | Where-Object { $_.AccountName -eq $batchaccountName } -ErrorAction stop
$batchkeys = $batchaccount | Get-AzureRmBatchAccountKeys

We need the storage account to reconstruct the parameters ($StorageKey, $uncPath, $shareAccount) needed to access its SMB share. These parameters must be passed to the application that we will run on the nodes.

$StorageAccountName = "sa$($BatchAccountName)"
$StorageAccount = Get-AzureRmStorageAccount -ResourceGroupName $ResourceGroupName -Name $StorageAccountName -ErrorAction stop
$StorageKey = ($StorageAccount | Get-AzureRmStorageAccountKey)[0].Value
$Share = Get-AzureStorageShare -Name $ShareName -Context $StorageAccount.Context
$uncPath = $Share.Uri -replace 'https://','\\' -replace '/','\'
$shareAccount = "AZURE\$($StorageAccount.StorageAccountName)"

The final step before we can do real work is to retrieve the configuration of the VM pool. We will submit the job and tasks to this pool.

$pool = Get-AzureBatchPool -BatchContext $batchkeys | Where-Object { $_.State -eq "Active" }
$PoolInformation = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSPoolInformation"
$PoolInformation.PoolId = $pool.Id

Create Batch Job and Tasks

A job is basically a container for tasks. It has a name, can be disabled or enabled, and is dedicated to a pool in an Azure Batch account. The name contains a timestamp, making it more convenient to track what happened and when. If job creation fails for some reason, we require a hard stop.

$jobnamePostfix = (Get-Date -Format s) -replace ':', ''
$jobname = "$jobnamePrefix-$jobnamePostfix"
New-AzureBatchJob -BatchContext $batchkeys -Id $jobname -PoolInformation $PoolInformation -ErrorAction Stop

Once the job exists, tasks can be submitted. The tricky part is constructing the command-line argument. We need to do a little pre-work: creating a unique name using a random generator, constraining the task to 3 retries on failure, and retrieving the application definition to run.

$taskPostfix = Get-Random -Minimum 0 -Maximum 1000000
$constraints = New-Object Microsoft.Azure.Commands.Batch.Models.PSTaskConstraints -ArgumentList @($null,$null,3)
$batchapp = Get-AzureRmBatchApplication -AccountName $batchaccountName -ResourceGroupName $batchaccount.ResourceGroupName -ApplicationId $Applicationname -ErrorAction stop
$version = $batchapp.DefaultVersion
$appref = New-Object Microsoft.Azure.Commands.Batch.Models.PSApplicationPackageReference
$appref.ApplicationId = $batchapp.id
$appref.Version = $version

Finally, tasks are submitted to the VM Pool. For each Mersenne prime to be calculated, a new task is created. Each task gets put into a queue. The queue is used to submit tasks to Pool nodes (VMs). A next task gets scheduled only after successful completion, or definite failure of the previous task. Non-definite task failures are simply retried.

The important bits: $ps1file is a hardcoded reference to the glue script generate_decimal_mersenne_and_upload.ps1. This script accepts the command-line arguments (an index to a Mersenne prime, and parameters for the SMB share). The task name must be unique, and is constructed from a prefix, the index, and a random postfix. The cmdlet New-AzureBatchTask submits the task and does not wait for its completion.

$firstindex..$lastindex  | ForEach-Object {
    $ps1file =  "%AZ_BATCH_APP_PACKAGE_MERSENNE#$($version)%\generate_decimal_mersenne_and_upload.ps1"
    $taskCMD = "cmd /c `"powershell -executionpolicy bypass -File $ps1file -index $_ -uncpath $uncPath -account $shareAccount -sakey $StorageKey`""
    $taskName = "$tasknameprefix-$_-$taskPostfix"
    New-AzureBatchTask -JobId $jobname -BatchContext $batchkeys -CommandLine $taskCMD -Id $taskname -Constraints $constraints -ApplicationPackageReferences $appref
}

If you would dump $taskCMD, it might look like the following (credentials are randomized): cmd /c "powershell -executionpolicy bypass -File %AZ_BATCH_APP_PACKAGE_MERSENNE#1.0%\generate_decimal_mersenne_and_upload.ps1 -index 20 -uncpath \\sawalkthroughpkvydrcf.file.core.windows.net\mersenneshare -account AZURE\sawalkthroughpkvydrcf -sakey RCjwDatDd2TXgugAA74cfVUBqWROYvEYiEls0dKtXdD5zff4uOBW+SkHwonIG8iNJDG1kTf9anmKmrgbjBqWAZ=="

At this point, tasks should be running. To monitor for completion you could do something like the following. It gets the current state of all tasks in the job, summarizes them, and if there are any unfinished tasks, sleeps for three seconds. When done, it terminates the job, because there is no more work to do. This should not take long if you used the script defaults, because they generate only the first 20 Mersenne primes, none of which are very large.

do {
    $stats = Get-AzureBatchTask -BatchContext $batchkeys -JobId $jobname | Group-Object -NoElement state
    $stats | Format-Table
    $ready = ($stats.Values -notcontains "Active") -and ($stats.Values -notcontains "Running")
    if (-not $ready) { Start-Sleep -Seconds 3 }
} until ($ready)
Stop-AzureBatchJob -id $jobname -BatchContext $batchkeys

Inspect the results

With all the work done it's time to look at the Azure Portal and to retrieve the results. Let's start with the jobs and tasks. Go to the Resource group rg-batch-walkthrough, select the Batch Account, then Jobs (there should be just one initially), select this job, then select Tasks. The portal view should list the completed tasks, as follows.

Portal showing completed tasks

Pick any task you like, open it, and select Files on Node. This shows the list of files in the working directory of this particular task. It always contains the stdout and stderr streams, which is very handy for debugging. It also shows the output file from the Mersenne calculation. You can download this file if you like.

But since we made a point of saving the output to an SMB share, there are also other ways to get at the data. For instance, if you have access to the storage account using the SMB protocol (445/tcp), you can access it directly, for instance from an Azure VM. Accessing it from your home or company network is very likely to fail, because few ISPs allow port 445 to/from the Internet.

Alternatively, use Azure Storage Explorer, a tool to manage Azure Storage Accounts. You really should have a look at this if you are not familiar with it.
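The download can also be scripted with the same Azure storage cmdlets used earlier. A minimal sketch reusing the $Share object from the script above; the type filter (an assumption about the returned object types) skips any directories in the share root:

# List the files in the share and download them to the current directory
Get-AzureStorageFile -Share $Share |
    Where-Object { $_.GetType().Name -like "*CloudFile*" } |
    Get-AzureStorageFileContent -Destination .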

Suggested exercise:

  • Generate a couple of really large primes, such as indices 42 or 43. If you are feeling brave and are prepared to wait a couple of days, try the largest one: 49.

Detecting attempts to run untrusted code by using trusted executables in Azure Security Center


In February 2017, FireEye documented a sophisticated spear-phishing campaign targeting individuals within the Mongolian government. In the initial part of this attack, the attackers bypassed AppLocker restrictions by using Regsvr32.exe, which enables an attacker to run untrusted code. This technique has been used in many other attack campaigns. By using virtual machine behavioral analysis, Security Center can detect attempts to bypass AppLocker. When Security Center detects an attempt to run untrusted code by using trusted executables, it triggers an alert similar to the one below.

While Security Center can help you detect this attack, you can use EMET to mitigate it. Beyond that, always remember to implement a least-privilege administrative model and privileged access workstations.
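If you want a quick local check to complement the Security Center alert, one option (not part of the original guidance, just a minimal sketch) is to query the AppLocker event log for recent Regsvr32 activity; event IDs 8002, 8003, and 8004 are AppLocker's allowed, audited, and blocked events for EXE and DLL rules:

# Minimal sketch: scan recent AppLocker EXE/DLL events for Regsvr32 activity.
# 8002 = allowed, 8003 = audited (would have been blocked), 8004 = blocked.
# Reading this log typically requires an elevated session.
Get-WinEvent -LogName "Microsoft-Windows-AppLocker/EXE and DLL" -MaxEvents 200 |
    Where-Object { $_.Message -match "regsvr32" } |
    Select-Object TimeCreated, Id, Message |
    Format-List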

 

 

Partner Philanthropy Spotlight: GCI [Updated January 13]


(This article is a translation of Partner Philanthropy Spotlight: GCI, published on the Microsoft Partner Network blog on November 6, 2017. For the latest information, please refer to the original article.)

 

Our partners' innovative ideas and their commitment to their clients and communities are a constant source of inspiration for us. In this installment of our Partner Philanthropy series, which highlights that charitable work, we look at UK-based GCI and its Kids in Technology program. GCI is an industry-leading IT services company with offices across the UK and a strong track record of supporting women in the technology industry. The company is dedicated to helping women and children get closer to the world of technology, and its work is a wonderful example of how a partner can give back to the local community.


 

Inspiring children by creating opportunities

Margaret Totten, Microsoft Alliance Director at GCI, has long been an evangelist for women working in the IT industry, but she told us that, surprisingly, the idea for this program came to her while she was with her son. Chaperoning his school trip, Margaret noticed that many of the girls in his class were limiting their own potential, dreaming of marrying an athlete rather than picturing careers of their own: "All the boys wanted to be footballers, and all the girls wanted to be footballers' wives."

 

Disheartened by these conversations, Margaret immediately began working with her team member Kimberly Totten on ways to make the world of technology more approachable for children. Kimberly has extensive experience counseling children and teenagers, with a particular interest in supporting children from economically deprived areas. The two decided to pool their experience and knowledge to launch a program that gives back to the community and shows children interested in technology careers just how many opportunities are open to them.

 

"We thought, why not do for children what we do for our clients?"

— Margaret Totten, Microsoft Alliance Director, GCI

 

The program was first inspired by conversations with clients, while the two were explaining how technology could be used to boost productivity and deliver real impact. Incorporating elements of the Hour of Code program, they devised hands-on learning experiences that make technology and coding approachable.

In the UK, coding is not yet regarded as an essential foundational skill for schoolchildren, but Margaret sees it as the best way to democratize technology.

 

 

Technology broadens children's possibilities

The first Kids in Technology event was held at the local Microsoft Technology Center, where the children learned about the cloud and coding. That first session was well received, but for subsequent events the team decided to put more weight on the educational side of the program.

At the second event, for example, the children were split into groups and asked to volunteer for roles within a company. After learning the responsibilities of each role, from CEO to managing director, they were challenged to found their own mini-companies. Next, just like real GCI prospects, they were briefed on the various technology products and services that could benefit their companies. As a further step, they assessed whether their mini-companies were ready to move to the cloud.

 

"What we want children to understand is that a career in the technology industry is not just about sitting at a desk working on a computer. Thinking about how technologies like HoloLens can support healthcare workers, or how big data and analytics can help communities at risk, is important work too."

— Margaret Totten, Microsoft Alliance Director, GCI

 

The program's original goal was to support all children, regardless of nationality or background, but before long it grew into something much bigger. Today, it asks students to think about how they can use technology to make a real impact on the world.

 

In the Microsoft Partner Community (in English), we look forward to hearing your thoughts on how partners can make an impact on society.

 

 

Friday with International Community Update – Progress in each language (Dec. 2017)


Hello, Wiki Ninjas!
Today is Friday with International Community Update.

Progress at the end of December is as follows:

The topic of this month:

  • Anti-spam measures seem to have worked well; the numbers of Russian and Chinese Wiki articles have decreased.

Looking Back on 2017

Here is a look back at the past year's progress.

The topics of this year:

  • Two new locales (Punjabi - India and Bengali - India) participated!!
  • 21 locales increased, 2 decreased.

Thank you!!

Tomoaki Yoshizawa (yottun8)
Blog: blog.yottun8.com
Facebook: Tomoaki Yoshizawa
twitter: @yottun8
TechNet Profile: Tomoaki Yoshizawa

Office 365 Planned Service Changes – January 2018 Updates


A post outlining the recent updates to the Office 365 Planned Service Changes series:

 

Office 365 Planned Service Changes for 2017 | Updated: January 13, 2018

 

Office 365 Planned Service Changes for 2018 | Updated: January 13, 2018

  • Added Yammer notes to Word Online conversion - action required by January 26, 2018
  • Added Removal of some existing APIs and cmdlets in Office 365 Reporting Service - January 29, 2018
  • Added New process for updating Yammer profiles - action required by February 15, 2018
  • Added Postponed SharePoint Online Public Website Deletion - March 31, 2018

 

Office 365 Planned Service Changes for 2020 | Updated: January 13, 2018

  • Added Focused Inbox and retirement of Clutter - January 31, 2020

 

Hopefully this information helps you keep pace with and manage change in Office 365.

 

Top Contributors Awards! January’2018 Week 2!!


Welcome back for another analysis of contributions to TechNet Wiki over the last week.

Happy New Year Microsoft TechNet Wiki Readers.

First up, the weekly leader board snapshot...

 

As always, here are the results of another weekly crawl over the updated articles feed.

 

Ninja Award Most Revisions Award
Who has made the most individual revisions

 

#1 Ken Cenerelli with 48 revisions.

 

#2 Kapil.Kumawat with 38 revisions.

 

#3 .paul. _ with 35 revisions.

 

Just behind the winners but also worth a mention are:

 

#4 Peter Geelen with 32 revisions.

 

#5 Somdip Dey - MSP Alumnus with 25 revisions.

 

#6 Arleta Wanat with 25 revisions.

 

#7 Burak Ugur with 24 revisions.

 

#8 RajeeshMenoth with 21 revisions.

 

#9 Cian Allner with 19 revisions.

 

#10 Carsten Siemens with 9 revisions.

 

 

Ninja Award Most Articles Updated Award
Who has updated the most articles

 

#1 Ken Cenerelli with 26 articles.

 

#2 .paul. _ with 19 articles.

 

#3 Kapil.Kumawat with 18 articles.

 

Just behind the winners but also worth a mention are:

 

#4 Somdip Dey - MSP Alumnus with 13 articles.

 

#5 Cian Allner with 12 articles.

 

#6 Burak Ugur with 8 articles.

 

#7 RajeeshMenoth with 8 articles.

 

#8 Peter Geelen with 7 articles.

 

#9 Carsten Siemens with 7 articles.

 

#10 Anthony Duguid with 2 articles.

 

 

Ninja Award Most Updated Article Award
Largest amount of updated content in a single article

 

The article to have the most change this week was Jak pozyskac grupy SharePoint Online niezaleznie od jezyka witryny? (pl-PL), by Arleta Wanat

This week's revisers were Arleta Wanat & Ken Cenerelli

 

 

Ninja Award Longest Article Award
Biggest article updated this week

 

This week's largest document to get some attention is Getting Started with Entity Framework Core: Building an ASP.NET Core Application with Web API and Code First Development, by Vincent Maverick Durano

This week's reviser was Carsten Siemens

 

 

Ninja Award Most Revised Article Award
Article with the most revisions in a week

 

This week's most fiddled with article is Office 365: Partial mailbox migration from an ISP to Exchange Online without email interruption, by mb0339 - Marco. It was revised 9 times last week.

This week's revisers were Somdip Dey - MSP Alumnus, Arleta Wanat, Kapil.Kumawat, mb0339 - Marco, Peter Geelen & Burak Ugur

 

 

Ninja Award Most Popular Article Award
Collaboration is the name of the game!

 

The article to be updated by the most people this week is TechNet Guru Competitions - January 2018, by Peter Geelen

This week's revisers were João Eduardo Sousa, Arleta Wanat, Sabah Shariq, Somdip Dey - MSP Alumnus, Ryen Kia Zhi Tang, Tarh ik & .paul. _

 

 

Ninja Award Ninja Edit Award
A ninja needs lightning fast reactions!

 

Below is a list of this week's fastest ninja edits. That's an edit to an article within a short time of another person's edit.

 

Ninja Award Winner Summary
Let's celebrate our winners!

 

Below are a few statistics on this week's award winners.

Most Revisions Award Winner
The reviser is the winner of this category.

Ken Cenerelli

Ken Cenerelli has been interviewed on TechNet Wiki!

Ken Cenerelli has featured articles on TechNet Wiki!

Ken Cenerelli has won 51 previous Top Contributor Awards. Most recent five shown below:

Ken Cenerelli has TechNet Guru medals, for the following articles:

Ken Cenerelli's profile page

Most Articles Award Winner
The reviser is the winner of this category.

Ken Cenerelli

Ken Cenerelli is mentioned above.

Most Updated Article Award Winner
The author is the winner, as it is their article that has had the changes.

Arleta Wanat

Arleta Wanat has been interviewed on TechNet Wiki!

Arleta Wanat has won 32 previous Top Contributor Awards. Most recent five shown below:

Arleta Wanat has TechNet Guru medals, for the following articles:

Arleta Wanat has not yet had any featured articles (see below)

Arleta Wanat's profile page

Longest Article Award Winner
The author is the winner, as it is their article that is so long!

Vincent Maverick Durano

Vincent Maverick Durano has won 3 previous Top Contributor Awards:

Vincent Maverick Durano has TechNet Guru medals, for the following articles:

Vincent Maverick Durano has not yet had any interviews or featured articles (see below)

Vincent Maverick Durano's profile page

Most Revised Article Winner
The author is the winner, as it is their article that has been changed the most.

mb0339 - Marco

mb0339 has won 2 previous Top Contributor Awards:

mb0339 has not yet had any interviews, featured articles or TechNet Guru medals (see below)

mb0339's profile page

Most Popular Article Winner
The author is the winner, as it is their article that has had the most attention.

Peter Geelen

Peter Geelen has been interviewed on TechNet Wiki!

Peter Geelen has featured articles on TechNet Wiki!

Peter Geelen has won 194 previous Top Contributor Awards. Most recent five shown below:

Peter Geelen has TechNet Guru medals, for the following articles:

Peter Geelen's profile page

Ninja Edit Award Winner
The author is the reviser, for it is their hand that is quickest!

Kapil.Kumawat

Kapil.Kumawat has won 15 previous Top Contributor Awards. Most recent five shown below:

Kapil.Kumawat has not yet had any interviews, featured articles or TechNet Guru medals (see below)

Kapil.Kumawat's profile page

 

Another great week from all in our community! Thank you all for so much great literature for us to read this week!
Please keep reading and contributing!

 

Best regards,
— Ninja [Kamlesh Kumar]

 
