
The new Azure Load Balancer: a 10x scale increase


Author: Yousef Khalidi (CVP, Azure Networking)

This post is a localized version of The new Azure Load Balancer 10x scale increase, published on March 27, 2018.

 

Azure Load Balancer is a highly scalable, high-throughput, low-latency network load balancer for TCP and UDP traffic.

We have released a new Standard SKU of Azure Load Balancer. With 10x the scale, this SKU offers a variety of capabilities beyond the existing Basic SKU, including richer diagnostics, and it is designed to handle millions of flows per second and scale to very high workloads. The Standard and Basic SKUs share a common API, so you can choose the option that fits your needs.
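For illustration, here is a minimal AzureRM PowerShell sketch of creating a Standard SKU frontend; the resource group, names, and region are hypothetical placeholders:

# Create a Standard SKU static public IP and a Standard SKU load balancer (AzureRM module).
$pip = New-AzureRmPublicIpAddress -ResourceGroupName "lb-demo-rg" -Name "lb-demo-pip" -Location "japaneast" -Sku Standard -AllocationMethod Static
$frontend = New-AzureRmLoadBalancerFrontendIpConfig -Name "frontend" -PublicIpAddress $pip
New-AzureRmLoadBalancer -ResourceGroupName "lb-demo-rg" -Name "lb-demo" -Location "japaneast" -Sku Standard -FrontendIpConfiguration $frontend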

The main capabilities of the new Standard SKU are introduced below.

 

Dramatically increased scale

The Standard SKU can distribute network traffic to up to 1,000 VM instances in a backend pool, 10 times the existing Basic SKU. You can configure one or more virtual machine scale sets behind a single highly available IP address and use health probes to manage and monitor the health and availability of each instance.

Flexibility across the VNet

The Standard SKU can be used across an entire virtual network (VNet). It is not limited to a single availability set like the Basic SKU; any virtual machine in the VNet can be configured and added to the backend pool. You can also combine multiple scale sets, availability sets, and individual virtual machines in the backend pool.

Fast provisioning

The Standard SKU uses a modern control plane that applies configuration changes in seconds, so the API frontend is highly responsive and reacts quickly to updates and sudden changes.

IP address control and flexibility

You have full control over static public IP addresses on the frontend, so the load balancer can be combined with traditional network firewalls, where IP addresses are often hardcoded. You can also move static public IP addresses between load balancers during redeployments or upgrades to maintain consistency and stability.

Improved outbound connectivity

Both the Basic and Standard SKUs can use multiple frontend IP addresses. The Standard SKU extends this capability so that any or all of those IP addresses can be used for outbound traffic, which means you can increase the number of outbound connections by adding frontends.

Resiliency and Availability Zones support

The Standard SKU includes built-in support for Azure Availability Zones (AZs). You can make a frontend zone-redundant by using a single IP address for a public or internal frontend, or you can associate a frontend IP address with a specific zone. Any VM or VM scale set in the region can participate in this cross-zone load balancing. Because the regional data path is anycast, traffic to a zone-redundant IP address is advertised by the load balancer from all zones; even if an entire zone goes down, traffic is quickly served from instances in the other zones. For details, see the Availability Zones and Standard Load Balancer documentation.

High availability ports

Customers have long asked us for active/active setups and n+1 redundancy for network virtual appliances such as firewalls and other network proxies. Enabling high availability (HA) ports on a per-flow basis lets an internal Standard Load Balancer distribute load across all ports of its frontend. This makes HA configurations easy to set up and removes the need to define many individual load-balancing rules. For details, see the high availability ports documentation.

New insights and diagnostics

The new Standard Load Balancer provides telemetry and automatic in-band health measurements, plus insights into traffic volume, inbound connection attempts, outbound connection health, and the health of the Azure platform itself, making it ideal for customers who want more control over their deployments and better network visibility. As soon as a public frontend is configured, Azure begins in-band active measurements and uses the latest network insights to determine the health of your endpoints within the region. All of this information is surfaced as a set of multi-dimensional metrics in Azure Monitor and can be consumed by services such as Azure Operations Management Suite. For details, see the documentation on improved diagnostics and monitoring (in English).

Secure by default

The new SKU also changes and tightens the security posture. IP addresses and load-balanced endpoints are closed by default. To allow communication, you must open specific ports using a network security group (NSG) associated with the backend VM or with the subnet where the VM resides.
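As a sketch of what opening a port looks like in practice (names and the port are hypothetical), an NSG rule can be created with PowerShell and then associated with the backend subnet or NICs:

# Allow inbound TCP 80 to the backend; associate the NSG with the backend subnet or NICs afterwards.
$rule = New-AzureRmNetworkSecurityRuleConfig -Name "allow-http" -Protocol Tcp -Direction Inbound -Priority 100 -SourceAddressPrefix Internet -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 80 -Access Allow
New-AzureRmNetworkSecurityGroup -ResourceGroupName "lb-demo-rg" -Location "japaneast" -Name "backend-nsg" -SecurityRules $rule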

Azure Standard Load Balancer is currently available in 27 public cloud regions. For details, see the Load Balancer documentation.

 


Multiple Yammer networks on an Office 365 tenant


Hello, this is Yuka Seki from the SharePoint support team.

 

Recently (as of March 31, 2018), a plan to consolidate multiple Yammer networks that exist on a single Office 365 tenant was announced in the Office 365 Message Center (MC133153). We have received many questions from customers about the multiple Yammer networks present on their Office 365 tenants, so in this post we introduce how to check the Yammer networks on your tenant, the background to why multiple Yammer networks exist, and how to prepare for Yammer network consolidation.

<Contents>
1. Checking your existing Yammer networks
2. Why multiple Yammer networks were created
3. Consolidating Yammer networks
4. Managing Yammer networks


1. Checking your existing Yammer networks

In environments with many custom domains registered on the Office 365 tenant, multiple Yammer networks may have been created. First, check the current configuration of your Yammer networks from the network admin menu.

How to check the network configuration

1) Sign in to the Yammer network (e.g., https://www.yammer.com/contoso.onmicrosoft.com) as an Office 365 global administrator.
2) Click the gear button at the top left, then [Network Admin].
3) On the [Network] menu, click [Network Migration].
4) On the [Step 1 of 3 - Check/Add Verified Domains] screen, check the following information.

*If many of your Office 365 custom domains appear in the red box below, your environment has multiple Yammer networks on the tenant.

These Yammer networks are tied to the Office 365 tenant, and these multiple Yammer networks are the target of the announcement above. Before describing how to manage the individual Yammer networks, the background to this configuration is explained below.

 

2. Why multiple Yammer networks were created

Yammer was originally a separate service from Office 365, so standalone Yammer networks could be created based on domains that users owned. Yammer networks such as contoso.com in the figure above were created in the past as free Yammer networks.

Through a feature change rolled out from 2016 onward, these free Yammer networks came to belong to Office 365 tenants: when a Yammer network exists for a custom domain on an Office 365 tenant, that Yammer network is attached to the tenant. (When a Yammer network is attached to a tenant, it is automatically upgraded from the free version to the enterprise version.) As a result, free Yammer networks that previously existed independently of the tenant (e.g., contoso.com) were added to the tenant, and together with the contoso.onmicrosoft.com Yammer network created when the tenant itself was created, this produced environments with multiple Yammer networks.

Note that when no Yammer network has been created for a custom domain, the domain is consolidated into the Yammer network tied to the tenant's default domain (e.g., contoso.onmicrosoft.com). Simply registering a custom domain does not create a new Yammer network on the tenant.

 

3. Consolidating Yammer networks

In an environment with multiple Yammer networks, each Office 365 user can access only one of the Yammer networks tied to the tenant, and each Yammer network is its own private social space. We receive many inquiries about such environments, for example users accessing an unexpected Yammer network against the administrator's intent, and managing the networks and users tends to become cumbersome. Network consolidation, which was announced in MC133153, is the operation that merges these many Yammer networks into one. Consolidation can also be performed by an Office 365 global administrator, so based on how the administrator-driven consolidation works, here are a few points to note. More detailed information is available in the reference material at the end of this post, so please check it as well.

 

Points to note

- Only users (user names, profile information, and so on) are migrated from the source Yammer network to the destination. Content on the source Yammer network, including groups, is not migrated and is deleted. External groups are deleted as well.

- A consolidated Yammer network cannot be restored. After migration, the source network and its content can no longer be accessed.

- If the source Yammer network has external networks, they are attached to the destination Yammer network and the external network participants are preserved as they are.

 

4. Managing Yammer networks

As described above, consolidating Yammer networks has consequences such as deleting the content of the source networks, so we recommend checking how each Yammer network is used beforehand and, if necessary, exporting the content on the network. The steps are described below.


Checking the number of users and messages in a Yammer network

From a Yammer network that an Office 365 global administrator can sign in to, you can check the usage of each Yammer network tied to the tenant (user count, message count, and information about any external networks that have been created).

Steps to check Yammer network usage

1) Sign in to the Yammer network as an Office 365 global administrator.
2) Click the gear button at the top left, then [Network Admin].
3) On the [Network] menu, click [Network Migration].
4) Click [Next].
5) On the [Step 2 of 3 - Choose the Yammer network to migrate] screen, the usage of each Yammer network on the tenant (user count, message count, and external network information) is displayed.

 

Checking the content and user information in a Yammer network

By signing in to a Yammer network as a verified admin, you can export the content on the network to CSV files. A Yammer network originally created as a free network likely has no admin, so first follow the "Steps to add a Yammer verified admin" below, then try the "Steps to export Yammer network data."

 

Steps to add a Yammer verified admin
1) In the Office 365 tenant, prepare an account that meets the following conditions:

- A user with an email address in the Yammer network's domain (e.g., admin@contoso.com)

- A global administrator of the Office 365 tenant

2) Sign in to Office 365 with the admin account you created.
3) Click the Yammer tile in the Office 365 app launcher.
4) Go to the Yammer network you want to manage. Confirm that the URL you are accessing is https://www.yammer.com/<network domain>.
5) Click the gear button at the top left and confirm that the [Network Admin] menu appears, as shown below.

 

 

Steps to export Yammer network data

Using the data export feature, you can download lists of the messages, users, topics, and groups on a Yammer network in CSV format. Please note that there is no feature to import the downloaded data.

1) Sign in to Yammer as a user with verified admin rights.
2) Click the gear button at the top left, then [Network Admin].
3) Click [Export Data].
4) Enter the required information and click [Export].
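If you prefer scripting the export, Yammer also provides a Data Export API that returns the same data as a ZIP archive; a minimal sketch (the access token must belong to a verified admin of the network, and the date is a placeholder):

# Call the Yammer Data Export API and save the result as a ZIP file.
$token = "<verified-admin-access-token>"
Invoke-WebRequest -Uri "https://www.yammer.com/api/v1/export?since=2018-01-01T00:00:00z" -Headers @{ Authorization = "Bearer $token" } -OutFile "YammerExport.zip"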

- Note -
If you open an exported CSV file in Excel, the characters may be garbled because of the file encoding, so try one of the following:

A. Open the CSV file in Notepad, save it over itself, and double-click the saved file to open it in Excel
B. Use Excel's data import feature and specify UTF-8 when importing
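If PowerShell is available, a third option is to re-encode the file before opening it, for example (the file name is a placeholder):

# Re-save the UTF-8 CSV as UTF-16 ("Unicode"), which Excel opens without garbling.
Get-Content .\Messages.csv -Encoding UTF8 | Set-Content .\Messages-excel.csv -Encoding Unicode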

 

[Reference information]

Title: Manage the entire lifecycle of your Yammer domains in Office 365

URL: https://support.office.com/ja-jp/article/9d9f9ee8-7c56-4f50-81f3-aa0a8d761e14

Title: Network migration: Consolidate multiple Yammer networks

URL: https://support.office.com/ja-jp/article/a22c1b20-9231-4ce2-a916-392b1056d002

 

That's all for this post.

 

This information is current as of the date of writing and is subject to change without notice.

OS not booting? With the serial console in Azure, that's no problem.


With a local hypervisor, you can get into a VM even when something is wrong with it - a misconfigured network, an overly restrictive firewall, broken disk mount settings, and so on. In the public cloud, however, your access to a VM is built on IP communication with SSH or RDP; at most you can watch the boot sequence, but you can't fix anything. Azure is different now - it newly comes with a full serial console. Let's try it out.

Why the serial console comes in handy

A large class of potential problems concerns network communication. Accidentally disable the network adapter and your access is gone. Or you might misconfigure the IP, or block SSH or RDP on the guest firewall. Another set of situations concerns running services - for example, in systemd on Linux you accidentally disable starting sshd at boot, or on Windows you disable RDP. It can also be a problem with disks and the file system. Sometimes, when file system corruption is suspected at boot, something has to be confirmed interactively - and at that moment remote access does not work yet. Or you make a typo in the mount configuration in the Linux fstab and the OS halts during startup.

These are all situations that are difficult to solve in the cloud. They often lead to exporting the image, attaching it to another VM, and repairing things in the file system. Not great. Azure now comes with serial access, a real solution. And by the way, at the time of writing, AWS doesn't have this 🙂

The serial console with Linux

When creating the VM (or later, if you like), we enable Boot Diagnostics. This used to serve only for one-way collection of logs from the serial console, but now it also enables interactive access.
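Boot Diagnostics can also be enabled from PowerShell with the AzureRM module; a minimal sketch with placeholder names:

# Enable Boot Diagnostics on an existing VM; the serial log goes to the given storage account.
$vm = Get-AzureRmVM -ResourceGroupName "demo-rg" -Name "demo-vm"
Set-AzureRmVMBootDiagnostics -VM $vm -Enable -ResourceGroupName "demo-rg" -StorageAccountName "demodiagstore"
Update-AzureRmVM -ResourceGroupName "demo-rg" -VM $vm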

Let's connect to the serial console and watch the server boot. The interactive window is built on the same technology as the very popular Azure Cloud Shell. We can, for example, see GRUB, jump into it, and boot into Emergency Mode, which depending on the specific OS configuration can serve for password recovery.

Or watch the OS start and interact with it.

Here I booted into emergency mode.

Thanks to that, I can change a forgotten password.


Checkpoints when load balancer diagnostic logs are not emitted


Hello, this is Uda from Azure support.
This post introduces what to check when the diagnostic logs of a load balancer (Basic) are not emitted.

For the behavior of load balancers in Azure, please also see the following post.

About load balancer diagnostic logs


Like other Azure resources, load balancers provide a feature to collect diagnostic logs.

Details are in the documentation below; two logs are provided: the "alert event log," recorded when the load balancer experiences port exhaustion, and the "health probe log," recorded when the status of a probe changes.
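As an aside, the diagnostic settings can also be enabled from PowerShell; a minimal sketch with placeholder resource names (the category names follow the documentation above):

# Enable both load balancer log categories and send them to a storage account (AzureRM module).
$lb = Get-AzureRmLoadBalancer -ResourceGroupName "demo-rg" -Name "demo-lb"
Set-AzureRmDiagnosticSetting -ResourceId $lb.Id -Enabled $true -Categories LoadBalancerAlertEvent, LoadBalancerProbeHealthStatus -StorageAccountId "<storage-account-resource-id>"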

Main reasons diagnostic logs are not emitted


When customers contact support because "the load balancer's diagnostic logs are not being emitted," the main causes are as follows:

  1. An internal load balancer is being used
  2. No event that would be logged has occurred
  3. The Microsoft.Insights resource provider is not registered

Each of these is explained in turn below.

If you are using an internal load balancer

As noted in the documentation above, diagnostic logs are supported only for external (internet-facing) load balancers.

Unfortunately, internal load balancers do not support diagnostic logs, so the feature cannot be used with them.

Log analytics currently works only for internet-facing load balancers. Logs are available only for resources deployed with the Resource Manager deployment model; they are not available for resources in the classic deployment model.

If no loggable event has occurred

Diagnostic logs are emitted only when a corresponding event occurs.

The alert event log is recorded, for example, when port exhaustion actually occurs. The health probe log is recorded, for example, when a backend VM goes down and only one of two instances remains healthy, or when it recovers and two of two instances are healthy again. If there is no change in state, no diagnostic logs are emitted, so if necessary try stopping or restarting a VM.

Also, it can take a little while for logs to actually be emitted, so wait about 30 minutes and then check again whether the logs have appeared.

If the Microsoft.Insights resource provider is not registered

The diagnostic log feature is provided by the Microsoft.Insights resource provider. If this resource provider is not registered, logs may not be emitted.

As described in the following post about virtual machine diagnostic logs not being displayed, if the resource provider is unregistered, please register it manually and explicitly.
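The registration state can be checked and fixed from PowerShell, for example:

# Check the registration state of the Microsoft.Insights resource provider...
Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Insights
# ...and register it explicitly if it is not "Registered".
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Insights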

 

We hope this information is helpful.

The content of this information (including attachments, linked content, and so on) is current as of the date of writing and is subject to change without notice.

Microsoft Office 365 in education. Organizing interactive learning. Online discussion in Office 365


The author of this article is Vitaly Vedenev.

The results of the research work [1] can be discussed in an online discussion. I continue to examine the use of interactive methods in the learning process with the tools of Office 365 [1-5].

What will you know and be able to do after reading this article?

- How to organize an online discussion using the integrated services and applications of Microsoft Office 365.

Learning discussions are a form of cognitive activity in which the participants in the educational process exchange their opinions, ideas, and judgments on the learning problem under discussion in an orderly and purposeful way.

A discussion develops the following competencies: communication skills (the ability to communicate, to formulate and ask questions, to defend one's point of view, to respect and accept the interlocutor, and so on), the capacity for analysis and synthesis, the ability to take responsibility, and the ability to identify and solve problems; in short, the skills of social interaction, among others.

For a group discussion, all the learners participating in the research [1] are divided into small subgroups, which discuss the questions that make up the lesson topic. The discussion can be organized in two ways: either all subgroups analyze the same question, or a large topic is divided into separate assignments. The traditional tangible results of a discussion are: a list of interesting ideas, a presentation by one or two members of each subgroup, methodological write-ups or instructions, and an action plan.

Scenario 1. Organizing a discussion in a Microsoft Teams group chat

An educational discussion is a purposeful, collective examination of a specific problem (situation), accompanied by an exchange of ideas, experience, judgments, and opinions within the group.

The group can be a Microsoft Teams team, organized by analogy with a topic-based seminar [scenario 3, 6]:

How do you run a discussion in Microsoft Teams?

  1. In Microsoft Teams, create, for example, a "Discussion" channel in the learners' group ("Teams").
  2. In the channel, post in advance the questions to be discussed, for example based on the results of the research work [1].
  3. Schedule a meeting in the group using "Meetings," or in another convenient way, for example a group chat.
  4. The discussion can be organized after an introductory talk; questions and opinions can be asked and voiced in the video chat during the talk and during the free discussion that follows, taking turns and observing netiquette, and supporting materials can be shown.
  5. Produce a text document with the main conclusions of the discussion, so the group can review the outcome and all participants can see the questions and answers from the discussion.

 

Scenario 2. Organizing a discussion in Skype and Skype for Business

A discussion is a type of dispute, close to polemics, consisting of a series of statements made in turn by the participants. The statements must relate to the same subject or topic, which gives the exchange the necessary coherence. Online discussions can be held both in Skype and in Skype for Business.

Skype for Business is a unified enterprise communications platform [7] that lets participants in the educational process make video calls to one another and to external contacts, see the online presence of all connected contacts, send instant messages, discuss documents together, hold audio, video, or web conferences for a fairly large number of participants, and present content of any complexity.

In the near future, Skype for Business capabilities will be available directly from Microsoft Teams [7].

Skype is a popular voice and video communication service for a small number of participants; the free client can be downloaded from the Internet. Skype is gradually being integrated with new Microsoft technologies.

Among other things, Skype is intended for the following educational uses [8]:

- Skype lessons.

- Mystery Skype (a global game).

- Virtual field trips.

- Guest lecturers.

Currently, for a small audience [9], Skype and Skype for Business can be used together to organize discussions. The main load during a discussion consists of voice messages with video plus a text chat:

You can read more about using Skype and Skype for Business together in the article "Microsoft Office 365 in education. Communications. Skype and Skype for Business."

Conclusions on organizing online discussions:

1. Although you can be in any convenient place and use any device, you are communicating with real people who are remote from you, therefore:

2. You must observe all the rules of communicating with the discussion participants and, at the same time, netiquette (take turns speaking on the discussion topic, and so on).

3. Do not stray from the discussion topic and the questions put forward for discussion.

Sources used:

  1. Microsoft Office 365 in education. Organizing interactive learning. The research method https://blogs.technet.microsoft.com/tasush/2018/03/30/organizacija-interaktivnogo-obuchenija-issledovatelskij-metod/
  2. Microsoft Office 365 in education. Organizing interactive learning with Office 365. Overview https://vedenev.livejournal.com/25227.html
  3. Microsoft Office 365 in education. Organizing interactive learning in Microsoft Teams: colloquium https://vedenev.livejournal.com/25489.html
  4. Microsoft Office 365 in education. Organizing interactive learning in Microsoft Teams. The case method https://blogs.technet.microsoft.com/tasush/2018/01/26/organizacija-interaktivnogo-obuchenija-kejs-metod/
  5. Microsoft Office 365 in education. Organizing interactive learning. Brainstorming https://blogs.technet.microsoft.com/tasush/2018/02/16/organizacija-interaktivnogo-obuchenija-mozgovoj-shturm/
  6. Microsoft Office 365 in education. How to hold lessons in Microsoft Teams chat. Examples https://blogs.technet.microsoft.com/tasush/2017/04/14/tehnologija-provedenija-uchebnyh-zanjatij-v-chate-microsoft-teams-primery/
  7. Microsoft Office 365 in education. Modern learning and Microsoft Teams. New trends https://vedenev.livejournal.com/25012.html
  8. Learning with IT https://education.microsoft.com/Learning/LearningPrograms/Detail/315
  9. Microsoft Office 365 in education. Communications. Skype and Skype for Business https://vedenev.livejournal.com/28526.html


Installing and configuring the Cloud Distribution Point – Part 1 – Installation


In this post we will show how to install and configure the Cloud Distribution Point.

In the previous post we already completed the creation and configuration of the certificates. Now let's begin configuring the Cloud Distribution Point in the System Center Configuration Manager console. Under Administration, go to Cloud Services, right-click Cloud Distribution Points, and then click Create Cloud Distribution Point, as shown below.


 

In the window below, enter the Subscription ID of your Azure account and click Browse to select the management certificate for your Azure account.


 

Now click Browse and select the certificate created for the Cloud Distribution Point service. Note that only after you select the certificate will the Service FQDN and Service Name fields be filled in.

Choose the Region and the Primary Site that the Cloud Distribution Point will serve, and click Next.

 


 

Select the values for the data transfer alerts and click Next.


 

On the Summary page, click Next so that System Center Configuration Manager performs the configuration.


 

Now click Close.


 

With the whole process finished successfully, we can now follow the provisioning of the service in Azure through the console and also through the CloudMGR.log file.

Another important log is CloudDP-<guid>.log, which tracks the health of the service, storage information, and the connections from clients that fetch content from the Cloud Distribution Point.


 

Now, to run a test, distribute some content to the Cloud Distribution Point and follow along in the Distmgr.log and CloudDP-<guid>.log files as well as in the console.


 

Finally, for System Center Configuration Manager clients to be able to access content on the Cloud Distribution Point, you must create a CNAME record in DNS whose Alias Name is the Common Name of the service and whose Fully Qualified Domain Name is the Service Name we noted in the earlier steps, in the format ServiceName.cloudapp.net.

Note that it is crucial that the client can resolve this CNAME in order to locate content successfully.
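On a Windows DNS server, the record could be created with PowerShell, for example (the zone and names here are hypothetical placeholders):

# Create the CNAME that points the service's Common Name at ServiceName.cloudapp.net.
Add-DnsServerResourceRecordCName -ZoneName "contoso.com" -Name "clouddp" -HostNameAlias "MyCloudDPService.cloudapp.net"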


That completes all the configuration needed for the Cloud Distribution Point, and it is now ready to be used.

 


Content created and published by:
Jeovan M Barbosa
Microsoft PFE
Configuration Manager


Run your Python script on demand with Azure Container Instances and Azure Logic Apps


By Basim Majeed, Cloud Solution Architect at Microsoft

An increasing focus has been placed recently on the data science process: a methodology to govern the enterprise-scale effort that goes into the development, deployment and maintenance of data analytics. Data scientists have not been lacking in terms of tools for developing their algorithms, but when it comes to deploying their solutions, especially in a hybrid environment, the available tools have not been flexible enough. Data scientists love to write their Python and R scripts using open source tools, such as PyCharm and RStudio. Such tools allow them to work interactively with samples of the data and build the analytics algorithm gradually.

Most of the developed data analytics scripts need to be deployed in a complex environment that can span hybrid cloud and on-premises resources. Having the right tools to allow the governance and deployment of these scripts within a workflow that meets business needs is a major part of making a success of the data science effort.

Microsoft Azure offers several services that allow flexibility of deployment and integration of data analytics, and allow the algorithms developed by data scientists to be made available in a number of ways based on the specific usage scenario. In this article I will show how to host a Python script in Azure Container Instances and how to then integrate the container into a workflow using Azure Logic Apps. Using Logic Apps makes it easy to ensure the container is only created on demand and then turned off, so that the cost is only incurred when necessary.

 

Architecture

This scenario requires the Python script to run on demand based on a trigger event (e.g. when new data becomes available). The script retrieves data from an Azure SQL database, operates on the data and then writes the results back to the database as shown in the diagram below. A Docker container image hosts the Python script and is registered with the Azure Container Registry. The Logic Apps instance controls the workflow and is instantiated by the trigger signal, creating a container group with a single container based on the image stored in the registry. The container runs the Python script and on completion it is destroyed by the Logic App.

 

Implementation

The first step is to use Docker to build a container image that can run the Python script. Since it is necessary for the script to interact with the SQL database, we need to make sure that the Dockerfile used to build the container image contains the necessary reference to the pyodbc library.  A complete Dockerfile can be found here, though you will need to add the necessary command to include your Python script as part of the Dockerfile. For example, to include the Python script “my_script.py” you will need to add the following (note: modify the following two lines based on where you want to place the scripts in the container image):

ADD my_script.py /

CMD [ "python", "./my_script.py" ]

After you have created your Dockerfile you can use the Docker Command Line Interface (CLI) “build” command to build your container image:

docker build -t <image-name> .

The second step involves registering and uploading the container image with the Azure Container Registry, making sure to tag the image with information such as the image version. You can use the Azure CLI to achieve this as described here.
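The push typically looks something like the following sketch, where "myregistry" and the tag are placeholders:

az acr login --name myregistry
docker tag <image-name> myregistry.azurecr.io/<image-name>:v1
docker push myregistry.azurecr.io/<image-name>:v1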

The third step is about building the workflow using Azure Logic Apps. With the recent addition of Container Instance Group connectors, Logic Apps can control the creation of Container Instances inside container groups, monitor the container state to detect successful execution, and then delete the container and the associated container group. By ensuring that the container is only active for the amount of time necessary to complete the task, charges are minimised.

There are many trigger types that can be used to start the Logic App including webhooks, http notifications and timed events, allowing the workflow to integrate the Python script execution with external events. In the Logic App instance shown in the diagram below the trigger is set as a timed event. When the Logic App receives the timer event it creates a Container Group and a Container inside the group based on the image retrieved from the registry. A loop is then started that monitors the state of the Container Group until it has succeeded (indicating that the Python script has completed). The last step is to delete the Container Group.
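The same lifecycle the Logic App drives can be exercised by hand with the AzureRM PowerShell module when testing the image; a hedged sketch (names are placeholders, and the exact state property name may differ from what the connector reports):

# Create a container group from the registry image, wait for the script to finish, then clean up.
$cred = Get-Credential    # registry admin user and password
New-AzureRmContainerGroup -ResourceGroupName "demo-rg" -Name "script-job" -Image "myregistry.azurecr.io/myimage:v1" -RegistryCredential $cred
while ((Get-AzureRmContainerGroup -ResourceGroupName "demo-rg" -Name "script-job").State -ne "Succeeded") { Start-Sleep -Seconds 30 }
Remove-AzureRmContainerGroup -ResourceGroupName "demo-rg" -Name "script-job"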


Configuring MIM 2016 Full and Incremental Imports for SharePoint

$
0
0

Recently I was assigned to help get MIM 2016 working with SharePoint 2016. The initial MIM Install was pretty straightforward. I am going to assume you have installed MIM and have run the PowerShell for Full and Delta imports using SharePointSync.psm1 and that they are working correctly. I am also going to assume you have SP1 and at least KB4021562 for FIM.

For some reason I struggled with getting the Delta and Full imports scheduled in Task Scheduler, so I am writing this blog for my reference in the future.

First, we need to create a couple of PowerShell Scripts (one for the full import and one for the Delta Import).

The scripts should look like below. Save them to a directory (because I was on a test system I only had a C drive, so I saved them to C:\ScheduledScripts).

Delta.ps1:

Import-Module C:\SharePointSynchronization\SharePointSync.psm1
Start-SharePointSync -Delta -Confirm:$false

 

Full.ps1:

Import-Module C:\SharePointSynchronization\SharePointSync.psm1
Start-SharePointSync -Confirm:$false

 

 

Now open Task Scheduler and create a task. Enter a name (MIM Delta Import). Click Change User or Group and set the account for running the task to a service account that is not part of MIM or SharePoint (see Plan for MIM Security). Click Run whether user is logged on or not, and check Run with highest privileges. This account will need to be assigned the "Log on as a batch job" user right in secpol.msc. Note: This will ask for and save the credentials for the service account that you use for running this task. If your security posture will not allow you to save credentials, you will only have two options: run the task only when the service account is logged in, or configure it to run as NT AUTHORITY\SYSTEM. Both of these have drawbacks and should be thoroughly evaluated.

Select Triggers Tab. Select New. Select appropriate days/times for schedule. Click OK.

Select the Actions tab. Select New. For the Program/Script: input box, type PowerShell. For the Add Arguments (optional) input box, type -noprofile -executionpolicy bypass -file C:\ScheduledScripts\delta.ps1. Click OK.

 

Click OK. Screen should look like below:

Click OK to save. Enter the password of the service account you used if prompted.

Repeat steps above for full import.
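If you'd rather script the task creation itself, here is a sketch using the ScheduledTasks module; the account, password handling, and schedule are placeholders to adapt (repeat with Full.ps1 for the full import):

# Register the delta-import task to run daily as the service account, elevated.
$action  = New-ScheduledTaskAction -Execute "PowerShell.exe" -Argument "-noprofile -executionpolicy bypass -file C:\ScheduledScripts\delta.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName "MIM Delta Import" -Action $action -Trigger $trigger -User "CONTOSO\svc-mimsync" -Password "<service-account-password>" -RunLevel Highest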

Now, off to the right, click Run to test the script.

The first error I received was for the .psm1 file:

Security warning: Run only scripts that you trust.

While scripts from the internet can be useful, this script can potentially harm your computer. If you trust this script, use the Unblock-File cmdlet to allow the script to run without this warning message.

To fix this, open PowerShell as admin and run Unblock-File C:\SharePointSync.psm1

Click Run again.

This time the import fails with no information. Looking in Event Viewer, I found error 10016.

Opening Regedit and searching for the GUID {835BEE60-8731-4159-8BFF-941301D76D05}, I see it belongs to Forefront Identity Manager Synchronization.

Opening Component Services and then DCOM Config, I see that the only accounts/groups listed with Launch and Activation Permissions are the local groups created by MIM.

Open Computer Management and add your service account to the local group MIMSyncAdmins. See the referenced Plan for MIM Security.
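On Windows Server 2016 the same change can be made in one line of PowerShell (the account name is a placeholder):

# Add the scheduled-task service account to the MIM-created local group.
Add-LocalGroupMember -Group "MIMSyncAdmins" -Member "CONTOSO\svc-mimsync"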

 

Closer to the Edge


Logbook entry: 180404


With the start of the fourth quarter of Microsoft's fiscal year, we are completing our "Intelligent Cloud, Intelligent Edge" mission. Our goal is to bring cloud computing even closer to the user, no matter whether that user is a person or a machine. In doing so, we address every area of our life together as a society: at home and at leisure, at work and on the go. We bring cloud services to wherever they are meant to be used.

Two further paradigms follow from this megatrend, and Microsoft, together with partners and customers, intends to adjust to both:

First: the possibilities of artificial intelligence are becoming ever more diverse. The more of the world's data and knowledge can be used and processed, the more AI capabilities extend our ability to perceive situations and recognize relationships. This will change our working world as much as our world of experience.

Second: physical and virtual worlds complement each other to create even more profound worlds of experience. We can grasp more fully and better understand the people we communicate with, the things that surround us, the places where we spend time, and the activities we undertake.

These are the paradigms Microsoft is preparing for by bundling its development and product capacities. To that end, we have reorganized our activities into two entirely new groups.

First, there is the team named "Experiences & Devices," which will concentrate on developing a unified product vision around our end-user products. That is extremely demanding, because with the help of artificial intelligence and edge computing we will no longer work only via screen and keyboard but will address and use all of our senses. And we will use the most varied devices: smartphones, robots, vehicles, and much more that has computing power and artificial intelligence.

Second, there is the "Cloud & AI Platform" team, which will continue to develop the infrastructure for these devices. Here we will not only drive the Intelligent Cloud and Intelligent Edge forward but also equip them with artificial intelligence in all its variants around perception, knowledge, and value creation.

A central building block here is the Azure platform, which alongside Windows is now a load-bearing pillar of our solution stack. In fact, Windows is a core component of our Azure offering on the way to a distributed and unified IT infrastructure and application world. Azure will therefore continue to gain importance for our partners and customers. Already today, investments in Azure know-how promise a fast and repeated return.

Here too, we have regrouped the focus areas:

Business AI: The AI solutions for customer service and for marketing and sales analytics move into the Business Applications Group.

Universal Store and Commerce Platform: These offerings become part of the Cloud & AI Platform.

AI Perception & Mixed Reality: This new team will take care of speech and image recognition and other perception technologies.

AI Cognitive & Service Platform: This new team will continue to develop the AI Platform, AI Fundamentals, Azure Machine Learning (ML), AI Tools, and Cognitive Services.

In addition, we will establish the existing AI and research activities as a third group. Founded two years ago, this group will continue to work on foundational developments that will maintain Microsoft's technology lead in artificial intelligence in the years to come. Because we want to bring AI services even closer to the user, no matter whether that is a person or a machine: closer to the edge.

 

Visual Studio (C#): ASP.NET Core Web API app


Recently, our good old Navision has been drifting more and more toward classic Microsoft products (Office, Visual Studio). In this post I try to write a Visual Studio (C#) ASP.NET Core Web API application. I have to say that the speed and simplicity of writing and publishing such an application simply amazed me. Later (in the next post) I plan to use this Web API from a Business Central (ex-Navision) extension.

GitHub: https://github.com/finn777/ALFnavobjectpermissionsreportNetCoreWebAPI

Let's go.

Let's pause here for a moment. Recall that from an earlier post we already have a SQL database on Azure.

Entity Framework Core provides a Reverse Engineering feature that lets you automatically create all the necessary classes from an existing database.

Tools –> NuGet Package Manager –> Package Manager Console

Scaffold-DbContext "Server=tcp:alexef0test0navsqlserverazure.database.windows.net,1433;Initial Catalog=navobjectpermissionsreportsqldatabase;Persist Security Info=False;User ID=finn777;Password=Trantor2050;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;" Microsoft.EntityFrameworkCore.SqlServer

Modify ValuesController.cs.

Everything is ready. Let's run it.

Now let's publish it.

The Web API is now available in the cloud.

Now test it with Postman.

Finally, view the usage statistics in the Azure portal.

Examples:
https://alfnavobjectpermissionsreportnetcorewebapi.azurewebsites.net/api/values
https://alfnavobjectpermissionsreportnetcorewebapi.azurewebsites.net/api/values/tabledata/32
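The same endpoints can also be exercised from PowerShell instead of Postman:

# Call the deployed Web API and print the parsed JSON response.
Invoke-RestMethod -Uri "https://alfnavobjectpermissionsreportnetcorewebapi.azurewebsites.net/api/values"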

Links (Russian):
https://metanit.com/sharp/tutorial/
https://metanit.com/sharp/aspnet5/1.1.php
https://metanit.com/sharp/entityframeworkcore/1.1.php


Secure Your Office 365 Tenant by Attacking It

By David Branscome

Okay, so the Office 365 Attack simulator is LIVE!

It shows up here in the Security & Compliance portal.

When you click on it, the first thing it will tell you is that there are some things you need to set up before you can run an actual attack. There’s a link that says, “Set up now” (in the yellow box shown below). After you click that link, it says the setup is complete, but you’ll have to wait a little while before running an attack. (I only had to wait about 10 minutes when I set it up)

It also reminds you that you need to have MFA (multi-factor authentication) set up on your tenant in order to run an attack. This makes a lot of sense, since you want to ensure that anyone who runs the attack is a proven "good guy" on your network.

To set up MFA, follow the steps here:

Go to the Office 365 Admin Center

Go to Users > Active users.

Choose More > Setup Azure multi-factor auth

Find the people for whom you want to enable MFA. In this case, I am only enabling the admin account on my demo tenant.

Select the check box next to the people for whom you want to enable MFA

On the right, under quick steps, you'll see Enable and Manage user settings. Choose Enable.

In the dialog box that opens, choose enable multi-factor auth.

The Attacks

Spear Phishing

With a spear phishing attack, I'm sending an email to a group of "high-value" users - maybe my IT Admins, the CEO/CFO, the accounting office, or some other user group whose credentials I want to capture. The email contains a URL that will allow me to capture user credentials as part of the attack. (This is also why it's a requirement to set up MFA on the user account running the attack.) When I set up this attack, it needs to look like it's coming from a trusted entity in the organization. Maybe I'll set it up to make it appear as though it's coming from the IT Security group asking them to verify their credentials.

Brute Force Password (a.k.a., Dictionary Attack)

In this attack, I'm running an automated attack that just runs through a list of words (like a dictionary of passwords) that could be used as a password. It is going to use lots of well-known variations, such as using "$" for "s" and the number 0 for the letter O. If you thought Pa$$w0rd123 was going to cut it as a secure password on your Office 365 account, this attack will show you the error of your ways.

This type of attack is pretty lengthy in nature because there are thousands of potential guesses being made against each user account. The attack can be set up to vary in frequency (time between password guesses) and number of attempts.

It’s important to note that if a password is actually found to be successful, that password is not exposed to anyone – even the admin running the attack. The reporting simply indicates that the attack was successful in identifying the password for Bob@contoso.com, for example.

Password Spray Attack

A password spray attack is a little different from the brute force password attack, in that it allows the admin/attacker to define a password to use in the attack. These would typically be passwords that are meaningful in some way - not simply an attempt using hundreds or thousands of guesses. The password you use could be something like the name of a football team mascot and the year they won a championship, or the name of a project that people in one department are working on. Whatever criteria you select, you define what password or passwords should be attempted and the frequency of the attempts.

Ready? Let’s go hunting…

Launching a Password Spray Attack

First, I'll try the password spray attack. I've set up several accounts in my test tenant with passwords that are similar to the one I'll attempt to exploit - which is Eagles2018!. Notice that, by most criteria, that's a complex password - upper and lower case, alphanumeric, and it includes a special character - but it's also a fairly easily guessed password, since the Philadelphia Eagles won the Super Bowl in 2018 (though it pains me to say that).

I’ve set up a couple users with that password to ensure I get some results.

I go to my Attack simulator and click on Launch Attack.

The first screen is where I name the attack.

Next, I select the users I want to target. Notice that I can select groups of users as well.

Now I manually enter the passwords I want to use in the attack.

Confirm the settings, click Finish and the attack will begin immediately.

If I go back to my Attack Simulator console, I can see the attack running.

After the attack completes, I see the users who have been compromised using the password.

(Yes, I’ve reset their passwords now, so don’t try and get clever.) 😊

Now I politely encourage ChristieC and IrvinS to change their password to help ensure their account security.

Launching a Brute Force Password (Dictionary Attack)

Again, I've set up a couple accounts with some pretty common password combinations (P@ssword123, P@ssw0rd!, etc.).

I walk through the configuration of the attack, which is very similar to the Password Spray attack setup.

I set up my target users as before, and then I choose the attack settings.

In this case, I uploaded a text file containing hundreds of dictionary passwords, but you can create a sampling of several passwords by entering them manually one at a time in the field above the Upload button.
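If you need a quick custom wordlist rather than hunting one down, a small PowerShell sketch like this one generates the kinds of character-substitution variants discussed above (the seed words are arbitrary examples):

# Emit each seed word plus simple "s"->"$", "o"->"0", "a"->"@" variants to a wordlist file.
$seeds = "password", "eagles2018", "contoso"
$variants = foreach ($w in $seeds) {
    $w
    $w -replace "s", "$" -replace "o", "0" -replace "a", "@"
    ($w -replace "s", "$" -replace "o", "0" -replace "a", "@") + "!"
}
$variants | Set-Content .\wordlist.txt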

As the attack runs, you’ll see something like the screenshot below. Remember if you have a large number of users and a very large wordlist for the dictionary attack, this attack will run for quite some time as the simulator cycles through all the possible variations for each user.

And again, when the simulation is complete, you’ll want to caution DiegoS on his lack of good password hygiene.

In my next blog, I’ll show you how to do a Spear Phishing Attack. These are the REALLY sneaky ones….

Stay tuned!

 

Getting Public Folder Statistics Report in O365/Exchange Online


In this case, the customer was looking for a way to get the following information from the Public Folders in EXO - this was basically his "wish list" of information that he wanted to get:

  • Folder Display name
  • Folder Path
  • Owner
  • Item Count
  • Last Access Time
  • Last Modification Time
  • Total Item Size
  • Creation Time
  • If it is mail enabled or not
  • email address - if it is mail enabled

They were able to get this information when the folders were in on-premises Exchange, but have not been able to find a way to get it since they were migrated to O365. They wanted the information in order to clean up and remove thousands of unused folders, but didn't know a way to get the data.

I found a script at https://gallery.technet.microsoft.com/office/Snapshot-report-of-Public-21235573 that was designed to get Public Folder data from Exchange 2010 and returned most of the customer's desired info.

It gets everything except for the Last Access info, which is not available in Exchange Online as far as I can find.

I'm supplying the script content as text here; it can then be copied and pasted into a .ps1 file on the machine you intend to run it on, and it will be trusted, as the .ps1 file is seen as having been created on that machine and not on some foreign machine.

From the original, I had to tweak the syntax to get the PrimarySmtpAddress and to get the list of owners, which I could only retrieve Display Names for.
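The script assumes an open Exchange Online remote PowerShell session; if you don't have one, a minimal (non-MFA) connection sketch looks like this:

# Connect to Exchange Online remote PowerShell (basic auth; for MFA, see the next post).
$cred = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://outlook.office365.com/powershell-liveid/" -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session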


# This Sample Code is provided for the purpose of illustration only and is not intended to be used in a production environment.
# THIS SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.
# We grant You a nonexclusive, royalty-free right to use and modify the Sample Code and to reproduce and distribute the object
# code form of the Sample Code, provided that You agree: (i) to not use Our name, logo, or trademarks to market
# Your software product in which the Sample Code is embedded; (ii) to include a valid copyright notice on Your software product
# in which the Sample Code is embedded; and (iii) to indemnify, hold harmless, and defend Us and Our suppliers from and against
# any claims or lawsuits, including attorneys’ fees, that arise or result from the use or distribution of the Sample Code.

Write-Progress -Activity "Finding Public Folders" -Status "running get-publicfolders -recurse"
$folders = get-publicfolder -recurse -resultsize unlimited
$arFolderData = @()
$totalfolders = $folders.count
$i = 1
foreach ($folder in $folders)
{
$statusstring = "$i of $totalfolders"
write-Progress -Activity "Gathering Public Folder Information" -Status $statusstring -PercentComplete ($i/$totalfolders*100)
$folderstats = get-publicfolderstatistics $folder.identity
$folderdata = new-object Object
$folderdata | add-member -type NoteProperty -name FolderName $folder.name
$folderdata | add-member -type NoteProperty -name FolderPath $folder.identity
$folderdata | add-member -type NoteProperty -name LastAccessed $folderstats.LastAccessTime
$folderdata | add-member -type NoteProperty -name LastModified $folderstats.LastModificationTime
$folderdata | add-member -type NoteProperty -name Created $folderstats.CreationTime
$folderdata | add-member -type NoteProperty -name ItemCount $folderstats.ItemCount
$folderdata | add-member -type NoteProperty -name Size $folderstats.TotalItemSize
$folderdata | Add-Member -type NoteProperty -Name Mailenabled $folder.mailenabled

if ($folder.mailenabled)
{
#since there is no guarantee that a public folder has a unique name, we need to compare the PF's entry ID to the recipient object's external email address
$entryid = $folder.entryid.tostring().substring(76,12)
# $primaryemail = ((get-recipient -filter "recipienttype -eq 'PublicFolder'" -resultsize unlimited) | where {$_.externalemailaddress -like "*$entryid"}).primarysmtpaddress
$primaryemail = (get-recipient $folder.mailrecipientguid.tostring()).primarysmtpaddress
$folderdata | add-member -type NoteProperty -name PrimaryEmailAddress $primaryemail
} else
{
$folderdata | add-member -type NoteProperty -name PrimaryEmailAddress "Not Mail Enabled"
}

if ($folderstats.ownercount -gt 0)
{
$owners = get-publicfolderclientpermission $folder.identity | where {$_.accessrights -like "*owner*"}
$ownerstr = ""
foreach ($owner in $owners)
{
$ownerstr += $owner.user.displayname + ","
}
} else {
$ownerstr = ""
}
$folderdata | add-member -type NoteProperty -name Owners $ownerstr
$arFolderData += $folderdata
$i++
}
$arFolderData | export-csv -path PublicFolderData.csv -notypeinformation

Using PowerShell ISE to connect to Exchange Online when MFA is enabled


The customer here wished to be able to use the PowerShell ISE and still be able to connect with MFA (Multi-Factor Authentication) enabled for their account.

We already have a PowerShell module designed to allow connection with MFA enabled, but there is no direct integration of it with the PowerShell ISE.

When the Microsoft Exchange PowerShell app (https://technet.microsoft.com/en-us/library/mt775114%28v=exchg.160%29.aspx) is installed and executed, it launches a PowerShell session and runs a script to load the necessary modules and functions into the current PowerShell execution to provide the cmdlet.

The files related to the app are all loaded into a folder with a path similar to (as this is a Click To Run app):
C:\Users\<username>\AppData\Local\Apps\2.0\LC7A9808.VWQ\TDNEY3XY.VWX\micr..tion_c3bce3770c238a49_0010.0000_213d7102fbbdf9ba

To find the folder with the files in it, I first went to my Start Menu\Programs folder to find the shortcut for it:
C:\Users\<username>\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Microsoft Corporation

 

When I get the properties of that item, I see:

 

Looking at the Folder path value, I now know the name of the folder the module points to, and I can go to the C:\Users\<username>\AppData\Local\Apps\2.0 folder and find the appropriate folder.

To successfully use the Connect-EXOPSSession cmdlet in the ISE, I need to execute the CreateExoPSSession.ps1 script found in the above folder inside my ISE session. When it runs, it loads the necessary modules and functions so that the Connect-EXOPSSession cmdlet is available and works to connect to Exchange Online with MFA enabled.
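Putting that together, locating and dot-sourcing the script from the ISE might look like this sketch (the ClickOnce folder name varies per machine, so we search for the file):

# Find CreateExoPSSession.ps1 under the ClickOnce app cache, dot-source it, then connect.
$script = Get-ChildItem "$env:LOCALAPPDATA\Apps\2.0" -Recurse -Filter "CreateExoPSSession.ps1" | Select-Object -First 1
. $script.FullName
Connect-EXOPSSession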


Please upvote product suggestions on UserVoice


I was reviewing UserVoice input for suggestions about CSOM accessing Project Online and Project Server on premises.

I found several suggestions that have high vote counts and which have affected my Project Online customers. Can I ask you to upvote these suggestions?

  1. Clone Project Online PWA instances - https://microsoftproject.uservoice.com/forums/218133-microsoft-project/suggestions/15330594-clone-project-online-pwa-instances
  2. Provide support for backup and restore of a single project - https://microsoftproject.uservoice.com/forums/218133-microsoft-project/suggestions/12261789-provide-support-for-backup-and-restore-of-a-single, which is essentially the same as #3 below.

Additionally, I really like these ideas.

  1. Allow an alias to be associated with Baseline, Baseline 1, Baseline 2 etc - https://microsoftproject.uservoice.com/forums/218133-microsoft-project/suggestions/12277935-allow-an-alias-to-be-associated-with-baseline-bas
  2. Open multiple instances of Project for side-by-side comparison - https://microsoftproject.uservoice.com/forums/218133-microsoft-project/suggestions/13586853-open-multiple-instances-of-project-for-side-by-sid

Please help me improve the product by upvoting with me.

Printing to on-prem printers from Azure AD-joined devices


Since most of us have been using Active Directory for ages, you probably also take for granted the process for printing to your on-premises printers:  They are published to Active Directory, you can search for them, and you can easily connect to them.  But what happens when you join your devices to Azure AD instead?  As discussed in https://blogs.technet.microsoft.com/mniehaus/2018/02/21/afraid-of-windows-10-with-azure-ad-join-try-it-out-part-2/, you can certainly connect to a Windows Server (AD-joined) hosted print queue if you know its UNC path (authentication works via a Kerberos ticket).  But how do you discover that printer in the first place?  The low-tech approach is to label the printer with its UNC path, so you just walk up to it, read the UNC, and type it in.

Fortunately, we’ve released a higher-tech approach that solves two different problems:

  • Discovering Windows Server-hosted printers from Azure AD-joined devices
  • Printing to those printers from anywhere in the world

This solution is called Hybrid Cloud Printing because it connects those Azure AD-joined devices to your existing Active Directory-joined Windows Server printing infrastructure.  That was announced back in February via the Enterprise Mobility blog.  As you can probably guess from the diagram included in that blog, there are a few components in the solution:

First, some requirements:

  • Windows 10 1703 (Creators Update) or higher.
  • Windows Server 2016 on your print servers.
  • An Azure AD tenant.
  • Azure AD Connect, to synchronize your Active Directory with Azure AD.
  • An MDM service, e.g. Intune, to configure the print settings on each device.

Then, you need to set it up.  To make the connection from internet-facing Azure AD-joined devices to those on-prem Windows Server 2016-hosted services, Azure Application Proxy is used.  This makes an outbound connection to Azure, which is used to then allow inbound traffic to the published services.  There are two ways this can be set up:

  • Azure Active Directory pre-authentication, where Azure AD makes sure the user is authenticated before the traffic passes through the proxy.
  • Passthrough authentication, letting Windows Server authenticate the users via Kerberos.

Typically the first option is recommended, to support things like conditional access and multi-factor authentication. I recommend sticking with that recommendation, following the "five" steps at https://docs.microsoft.com/en-us/windows-server/administration/hybrid-cloud-print/hybrid-cloud-print-deploy:

  1. Install Azure AD Connect to sync between Azure AD and AD.  I covered that in https://blogs.technet.microsoft.com/mniehaus/2018/01/19/afraid-of-windows-10-with-azure-ad-join-try-it-out-part-1/.
  2. Install the Cloud Print package on the print server.
  3. Install the Azure Application Proxy.
  4. Configure the MDM policies.
  5. Publish desired shared printers.

Since each of those steps includes multiple sub-steps, it’s really more like a 20-25 step process.  And initially I walked through all of those steps – and it didn’t work at all.  Fortunately, one of the ConfigMgr/Intune MVPs, Sandy Yinghua, has a blog that documents how she did it.  I used those steps to verify my configuration and with that was able to get everything working.  I would note that I did things a little differently:

  • I didn’t use a custom internet URL (mcs.smsboot.com in her example) for the Azure App Proxy-published websites.  Instead, I used the generated msappproxy.net URLs because then I didn’t need additional certs.
  • I did get a public cert for my print server, so that I didn’t need to distribute a trusted root cert to the client machines when they talk to the print server via the Azure App Proxy.  Self-signed certs aren’t any fun (although Intune has no issues deploying them, so it wouldn’t be that bad).

But otherwise, my setup matches.  Here are the device configuration settings I configured in Intune:
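They map to the EnterpriseCloudPrint area of the Policy CSP; a sketch of the OMA-URIs with placeholder values (your tenant GUID, client ID, and proxy URLs will differ):

./User/Vendor/MSFT/Policy/Config/EnterpriseCloudPrint/CloudPrintOAuthAuthority = https://login.microsoftonline.com/<tenant-guid>
./User/Vendor/MSFT/Policy/Config/EnterpriseCloudPrint/CloudPrintOAuthClientId = <native-app-client-guid>
./User/Vendor/MSFT/Policy/Config/EnterpriseCloudPrint/CloudPrintResourceId = http://MicrosoftEnterpriseCloudPrint/CloudPrint
./User/Vendor/MSFT/Policy/Config/EnterpriseCloudPrint/CloudPrinterDiscoveryEndPoint = https://clouddiscoveryproxy-<tenant>.msappproxy.net/mcs
./User/Vendor/MSFT/Policy/Config/EnterpriseCloudPrint/MopriaDiscoveryResourceId = http://MopriaDiscoveryService/CloudPrint
./User/Vendor/MSFT/Policy/Config/EnterpriseCloudPrint/DiscoveryMaxPrinterLimit = 20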


And here are a few commands that I used as I was experimenting with the printer publishing process:

Publish a test printer

Publish-CloudPrinter -Printer EcpPrintTest -Manufacturer Microsoft -Model FilePrinterEcp -OrgLocation '{"attrs": [{"category":"country", "vs":"USA", "depth":0}, {"category":"organization", "vs":"AutoPilotRocks", "depth":1}, {"category":"site", "vs":"Redmond", "depth":2}, {"category":"building", "vs":"109", "depth":3}]}' -Sddl "O:BAG:SYD:(A;;LCSWSDRCWDWO;;;S-1-5-21-501278528-1731656756-2472999879-1114)(A;OIIO;RPWPSDRCWDWO;;;S-1-5-21-501278528-1731656756-2472999879-1114)(A;OIIO;GA;;;CO)(A;OIIO;GA;;;AC)(A;;SWRC;;;WD)(A;CIIO;GX;;;WD)(A;;SWRC;;;AC)(A;CIIO;GX;;;AC)(A;;LCSWDTSDRCWDWO;;;BA)(A;OICIIO;GA;;;BA)" -DiscoveryEndpoint https://clouddiscoveryproxy-autopilotrocks.msappproxy.net/mcs -PrintServerEndpoint https://cloudprintproxy-autopilotrocks.msappproxy.net/ecp -AzureClientId 23772c3e-1a3b-4f28-86f9-ec614a53a776 -AzureTenantGuid 0b458c6e-97ce-4c7e-bc5e-1d29552989a5

Query published printers

Publish-CloudPrinter -Query -DiscoveryEndpoint https://clouddiscoveryproxy-autopilotrocks.msappproxy.net/mcs -AzureClientId 23772c3e-1a3b-4f28-86f9-ec614a53a776 -AzureTenantGuid 0b458c6e-97ce-4c7e-bc5e-1d29552989a5

Unpublish a printer

Publish-CloudPrinter -Unpublish -Printer EcpPrintTest -DiscoveryEndpoint https://clouddiscoveryproxy-autopilotrocks.msappproxy.net/mcs -PrintServerEndpoint https://cloudprintproxy-autopilotrocks.msappproxy.net/ecp -AzureClientId 23772c3e-1a3b-4f28-86f9-ec614a53a776 -AzureTenantGuid 0b458c6e-97ce-4c7e-bc5e-1d29552989a5

Trying it out

Let’s look at the client experience then.  The first sign of something different is on the “Printers & scanners” page in Settings.  There is a new “Search for cloud printers” link:


When you click that, you can then browse from a list of available locations, presented in a hierarchy that you can define:


And you can search for printers by keyword (or just leave the keyword blank to get all printers in that location):


And finally, select the printer and click “Add device” to add it:


Finally, I have a printer, with a slightly different icon to show that it’s a cloud printer:


And printing works, from any internet-connected location (thanks to the Azure Application Proxy).  The particular “printer” in this case is useful for testing, as it doesn’t waste any paper:  It just drops an XPS file into a folder on the server.


Check out the Ignite 2017 video at https://www.youtube.com/watch?v=Bvt1L--lqE4 for more information.  And with any luck, this will be integrated into Windows Server 2019 and easier to set up.

Triaging a DLL planting vulnerability


DLL planting (aka binary planting/hijacking/preloading) issues resurface every now and then, and it is not always clear how Microsoft will respond to a report. This blog post will try to clarify the parameters considered while triaging DLL planting issues.

It is well known that when an application loads a DLL without specifying a fully qualified path, Windows attempts to locate the DLL by searching a well-defined set of directories in an order known as the DLL search order. The search order used in the default SafeDllSearchMode is as follows:

  1. The directory from which the application loaded. 
  2. The system directory. Use the GetSystemDirectory function to get the path of this directory.  
  3. The 16-bit system directory. There is no function that obtains the path of this directory, but it is searched. 
  4. The Windows directory. Use the GetWindowsDirectory function to get the path of this directory. function to get the path of this directory. 
  5. The current directory. 
  6. The directories that are listed in the PATH environment variable. Note that this does not include the per-application path specified by the App Paths registry key. The App Paths key is not used when computing the DLL search path. 

The default DLL search order can be changed with various options, as noted in one of our previous blog posts, “Load Library Safely”.  
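As a quick sanity check on a given machine, you can confirm that the default search behavior hasn’t been altered by reading the SafeDllSearchMode registry value — a minimal PowerShell sketch (when the value is absent, the default applies):

# Check whether SafeDllSearchMode has been explicitly disabled.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager'
$mode = (Get-ItemProperty -Path $key -ErrorAction SilentlyContinue).SafeDllSearchMode
if ($null -eq $mode -or $mode -eq 1) {
    'SafeDllSearchMode is in effect (the default).'
} else {
    'SafeDllSearchMode is disabled: the current directory is searched before the system directories.'
}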

A DLL load in an application becomes a DLL planting vulnerability if an attacker can plant the malicious DLL in any of the directories searched per the search order, and the DLL is not found in any of the earlier-searched directories that the attacker cannot access. For example, an application loading foo.dll that is not present in the application directory, the system directory, or the Windows directory gives an attacker an opportunity to plant foo.dll if he has access to the current working directory. DLL planting vulnerabilities are very convenient and are less work for an attacker: they give very easy code execution, since DllMain() gets called immediately on loading the DLL. Attackers don’t have to worry about bypassing any mitigation if the application allows loading non-signed binaries. 

Based on where the malicious DLL can be planted in the DLL search order, the vulnerability broadly falls into one of three categories: 

  1. Application Directory (App Dir) DLL planting. 
  2. Current Working Directory (CWD) DLL planting. 
  3. PATH Directories DLL planting. 

The above categories are what guide our response. Let’s look at each of them to see how we triage it. 

Application Directory (App Dir) DLL planting 

The application directory is where an application keeps its dependent non-system DLLs and trusts them to be intact. Files located in a program’s installation directory are presumed to be benevolent and trustworthy, and a directory ACL is the security control typically used to safeguard them. Anyone able to replace a binary in the installation directory presumably has the privileges necessary to write/overwrite files. The application directory is considered a code directory, where code-related artifacts for the application should be stored. If an attacker can overwrite a DLL within the application directory without being on the directory ACL, that is a much bigger issue than replacing/planting a single DLL. 

Let’s look at some of the scenarios involved with application directory DLL planting: 

Scenario 1: Malicious binary planting in a trusted application directory. 

Applications installed properly generally safeguard the application directory with ACLs, so elevated access (typically admin) is required to modify its contents. For example, Microsoft Word’s installation location is “C:\Program Files (x86)\Microsoft Office\root\Office16”, and admin access is required to modify anything in this directory. A victim who has admin rights can be tricked/socially engineered into planting DLLs in a trusted location, but if that is the case, they can be tricked/socially engineered into doing far worse things. 

Scenario 2: Malicious binary planted in an untrusted application directory. 

An application installed via XCOPY with no installer, made available on a share, downloaded from the internet, or shipped as a standalone executable in a non-ACLed directory are some of the scenarios that fall under the untrusted category. For example, an installer (including a redistributable, a setup.exe generated by ClickOnce, or a self-extracting archive generated by IExpress) downloaded from the internet and run from the default “Downloads” folder. Launching an application from an untrustworthy location is dangerous; a victim can easily be tricked/fooled into planting DLLs in these untrusted locations.  

 

A DLL planting issue that falls into this category, Application Directory DLL planting, is treated as a Defense-in-Depth issue that will be considered for updates in future versions only. We resolve any MSRC case that falls into this category as a vNext consideration, mainly due to the amount of social engineering involved in the attack and the by-design nature of the bug. A victim would have to be tricked into placing the malicious DLL (malware) where it can be triggered AND perform a non-recommended action (like running an installer in the same directory as the malware). A non-installed application has no reference point for “known good directory/binaries”, unless it creates the directory itself. Ideally, the installer should create a temporary directory with a randomized name (to prevent further DLL planting), extract its binaries to it, and use them to install the application. While an attacker can use a drive-by download to place the malware on the victim’s system, such as into the “Downloads” folder, the essence of the attack is social engineering.  
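As an illustration of that recommendation, an installer-style script can stage its payload in a freshly created, randomly named directory instead of running support binaries out of the download folder. A minimal PowerShell sketch with hypothetical package and binary names:

# Stage the payload in a randomly named temp directory before running it.
# 'setup-package.zip' and 'setup.exe' are hypothetical names;
# Expand-Archive requires PowerShell 5.0 or later.
$staging = Join-Path $env:TEMP ([System.IO.Path]::GetRandomFileName())
New-Item -ItemType Directory -Path $staging | Out-Null
Expand-Archive -Path "$env:USERPROFILE\Downloads\setup-package.zip" -DestinationPath $staging
& (Join-Path $staging 'setup.exe')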

In Windows 10 Creators Update we added a new process mitigation that can be used to mitigate Application Directory DLL planting vulnerabilities. This new process mitigation, PreferSystem32, when opted in, swaps the positions of the application directory and system32 in the DLL search order. Because of this, a malicious copy of a system binary planted in the application directory can no longer hijack the load. It can be enabled in scenarios where process creation can be controlled. 
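Where you manage clients with PowerShell, the built-in ProcessMitigations module can opt a specific executable into this mitigation. A hedged sketch — the flag name used below (PreferSystem32Images) is an assumption; confirm it against the output of Get-ProcessMitigation on your build:

# Opt a (hypothetical) executable into the PreferSystem32 mitigation.
# Assumption: the module exposes the flag as PreferSystem32Images.
Set-ProcessMitigation -Name 'ContosoLobApp.exe' -Enable PreferSystem32Images

# Review the effective settings for that executable:
Get-ProcessMitigation -Name 'ContosoLobApp.exe'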

Current Working Directory (CWD) DLL planting 

Applications typically set the directory from which they are invoked as the CWD; this applies even when the application is invoked through a default file association. For example, opening the file “D:\temp\file.abc” from a share makes “D:\temp” the CWD for the application associated with the .abc file type.  

Hosting files on a remote share, especially a WebDAV share, makes CWD DLL planting issues easier to exploit: an attacker can host the malicious DLL alongside the file and social engineer the victim into opening/clicking the file, causing the malicious DLL to load into the target application. 

Scenario 3: Malicious binary planted in the CWD. 

An application loading a DLL not present in any of the first three trusted locations will look for it in the untrusted CWD. A victim opening a .doc file from the location \\server1\share2 will launch Microsoft Word; if Word can’t find one of its dependent DLLs, oart.dll, in a trusted location, it will try to load it from the CWD \\server1\share2. Since the share is an untrusted location, the attacker can easily plant oart.dll to feed into the application. 

Trigger => \\server1\share2\openme.doc
Application  => C:\Program Files (x86)\Microsoft Office\root\Office16\Winword.exe
App Dir => C:\Program Files (x86)\Microsoft Office\root\Office16
CWD => \\server1\share2
Malicious DLL  => \\server1\share2\OART.DLL
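A quick way to see why this load falls through to the CWD is to check whether the DLL exists in the trusted locations searched first. A minimal sketch using the paths from the scenario above (the 16-bit system directory is omitted for brevity):

# Check the trusted locations searched before the CWD for a given DLL.
$dll = 'oart.dll'
$appDir = 'C:\Program Files (x86)\Microsoft Office\root\Office16'
$trusted = @($appDir, [Environment]::SystemDirectory, $env:WINDIR)
$hits = $trusted | Where-Object { Test-Path (Join-Path $_ $dll) }
if ($hits) {
    "Resolved before the CWD in: $($hits -join ', ')"
} else {
    "$dll is absent from the trusted locations - a copy planted in the CWD would win."
}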
 

A DLL planting issue that falls into this category of CWD DLL planting is treated as an Important-severity issue, and we will issue a security patch for it. Most of the DLL planting issues that we have fixed in the past fall into this category; advisory 2269637 lists a subset of them. This raises the question of why any Microsoft application would load DLLs that are not present in its application directory, the system directory, or the Windows directory. It so happens that various optional components, OS editions, and architectures ship with different sets of binaries, which applications sometimes fail to consider or verify effectively before loading the DLLs. 

PATH Directories DLL planting 

The last resort for finding DLLs in the DLL search order is the PATH directories: a set of directories added by various applications to help users locate the application and its artifacts.   

The directories in the PATH environment variable are always admin-ACLed, and a normal user can’t modify their contents. If a world-writable directory were exposed via PATH, that would be a bigger issue than any single instance of DLL planting, and we would treat it as an Important-severity issue. The DLL planting issue by itself, however, is considered low severity, since we don’t expect this planting vulnerability to cross any security boundary. Thus, DLL planting issues that fall into the category of PATH directories DLL planting are treated as won’t-fix.  
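To double-check that assumption on a particular machine, you can audit the ACLs of every PATH directory and flag entries writable by broad, non-admin principals. A minimal sketch — a heuristic only, so review any hits manually:

# Flag PATH directories that grant write access to broad principals.
$broad = 'Everyone', 'BUILTIN\Users', 'NT AUTHORITY\Authenticated Users'
$env:Path -split ';' | Where-Object { $_ -and (Test-Path $_) } | ForEach-Object {
    $dir = $_
    foreach ($ace in (Get-Acl $dir).Access) {
        if ($broad -contains $ace.IdentityReference.Value -and
            $ace.AccessControlType -eq 'Allow' -and
            ($ace.FileSystemRights.ToString() -match 'Write|Modify|FullControl')) {
            '{0} grants {1} to {2}' -f $dir, $ace.FileSystemRights, $ace.IdentityReference
        }
    }
}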

Conclusion 

We hope this clears up questions on how we triage a reported DLL planting issue and which situations we consider severe enough to warrant a security patch. Below is a quick guide to what we will fix/won’t fix via a security release (down-level). 

What Microsoft will address with a security fix 

CWD scenarios - Like an associated application loading a DLL from the untrusted CWD. 

What Microsoft will consider addressing the next time a product is released 

Application directory scenarios – This is at the complete discretion of the product group, based on whether it is an explicit or implicit load. Explicit loads can be tweaked, but implicit loads (dependent DLLs) are strictly by design, as the path can’t be controlled.  

What Microsoft won't address (not a vulnerability) 

PATH directory scenarios – Since there can’t be a non-admin-ACLed directory in the PATH, this can’t be exploited. 

 

-----

Antonio Galvan, MSRC

Swamy Shivaganga Nagaraju, MSRC Vulnerabilities and Mitigations Team 

Secure Your Office 365 Tenant – By Attacking It (Part 2)


By David Branscome

 

In my previous post (https://blogs.technet.microsoft.com/cloudyhappypeople/2018/04/04/secure-your-office-365-tenant-by-attacking-it/), I showed you how to use the Office 365 Attack Simulator to set up the Password Spray and Brute Force Password (Dictionary) Attacks.

What we often find, though, is that spear phishing campaigns are extremely successful against organizations and are often the very first point of entry for the bad guys.

Just for clarity, there are “phishing” campaigns and there are “spear phishing” campaigns.

A phishing campaign is typically an email sent to a wide range of organizations with no specific target in mind. These emails are usually generic in nature, taking the “spread a wide net” approach in hopes of getting at least one recipient to click on a URL or open an attachment. Think of the email campaigns you’ve likely seen where a prince from a foreign country promises you $30 million if you’ll click on a link and give him your bank account information. The sender doesn’t particularly care who gets the email, as long as SOMEBODY clicks on the links.

On the other hand, a spear phishing campaign is much more targeted. In a spear phishing campaign, the attacker has a specific organization they are trying to compromise – perhaps even a specific individual. Maybe they want to compromise the CFO’s account so that they can fraudulently authorize money transfers from the organization by sending an email that appears to be coming from the CFO. Or maybe they want to compromise a highly-privileged IT Admin’s email account so that the attacker can send an email asking users to browse to a fake password reset page and harvest user passwords. The intent with a spear phishing campaign is to make the email look very legitimate so that the recipients aren’t suspicious – or perhaps they even feel obligated to do as instructed.

What Do I Need?

As you can imagine, setting up a spear-phishing campaign takes a little more finesse than a brute force password attack.

First, decide WHO the sender of the spear phishing email will be. Maybe it’s HR requesting that you log in and update your benefits information. Or perhaps it’s the IT group asking everyone to confirm their credentials on a portal they recently set up.

Next, decide WHO you want to target with the campaign. It may be the entire organization, but keeping a low profile as an attacker also has its advantages.

You’ll want to use a relatively realistic HTML email so that it looks legitimate. The attack simulator actually provides two sample templates for you, as we’ll see below. Using the sample templates makes the campaign very easy to set up, but as you get more comfortable using the attack simulator, you will likely want to craft your own email to look more like it’s coming from your organization.

That should be enough to get us started.

Launching a Spear Phishing Attack

In the Attack simulator console, click on “Launch Attack”.

 

At the “Provide a name to the campaign” page, enter a name of your own or click “Use template”. If you click “Use template”, you will see two template options to choose from. I’ve chosen “Prize Giveaway” below:

 

Next, select the users you want to “phish”.

 

 

On the next page, if you’ve selected a template, all the details will be filled in for you. One important value to note here is the Phishing Login Server URL: select one of the phishing login servers from the drop-down. This is how the attack simulator is able to track who has clicked the URL in the email and provide reporting.

 

 

In the Email body form, you can customize the default email. Make sure the body includes the ${username} variable so that the email looks like it was sent directly to the end user.

 

 

Click Confirm and the Attack Simulator will send emails out to the end users you specified.

I opened the administrator account and saw this:

 

 

Notice that it customized the email to the MOD Administrator account in the body of the email.

If I click on the URL (this is the http://portal.prizesforall.com URL we highlighted earlier), I get sent to a website that looks like this.

 

Finally, if I click on the reporting area of the Attack Simulator, I can see who has clicked on the link and when.

 

 

Okay. But seriously…would you really have clicked on that URL?

Probably not.

So how do you make it a little more sophisticated?

Let’s create a more realistic attack.

In this attack we will use the Payroll Update template, which is very similar to what you might actually see in many corporate environments.  You can also create your own HTML email using your organization’s branding and formatting.

 

 

I’ll again target the MOD Administrator because he seems like a good target, since he’s the O365 global admin (and seems to be somewhat gullible).

In this situation, though, instead of sending from what appears to be an external email address (prizes@prizesforall.com, used in the previous attack), I’m going to pose as someone the user might actually know. It could be the head of HR or Finance, or the CEO. I’ll use that person’s actual email address so that it resolves correctly.

Notice that this template uses a different phishing login server URL from the drop-down. You’ll see why in a second.

In the Email body page, we’ve got a much more realistic looking email.

It should be noted, though, that if you make the email look ABSOLUTELY PERFECT and people click on the URL, what have they learned? It’s best to provide a clue in the email that a careful user would notice and recognize as a problem. Maybe send the official HR email from someone who isn’t actually in HR, or leave off a footer in the email that identifies it as an official HR email. Whatever it is, there should be something that you can use to train users to look out for.

 

 

Again, you Confirm the settings for the attack and the attack launches.

Going to the MOD Administrator’s mailbox….that’s much more realistic, wouldn’t you say?

 

 

When I click on the “Update Your Account Details” link, I get sent here, where I’m asked to provide a username and password, which of course, I dutifully provide:

 

 

Notice, however, that the URL at the top of the page is the portal.payrolltooling.com website - even though the page itself looks like a Microsoft login page. Many attacks mimic a "trusted site" to harvest credentials in this manner. When you're testing, you can use any email address (legitimate or not) and any password - it isn't actually authenticating anything.

The result is that I am directed to this page, which lets me know I’ve been “spear phished” and provides some hints for identifying these kinds of attacks in the future:

 

 

And finally, in the reporting, I see that my administrator was successfully spear phished.

 

The Value of Attack Simulations

This is all interesting (and a little bit fun), but what does it really teach us? The objective is that once we know what sorts of attacks our users are vulnerable to (password and phishing attacks are the two highlighted by this tool), we can provide training to help enhance our security posture. Many of the ransomware attacks that have blanketed the news lately started as phishing campaigns.

If we can take steps to ensure that our users can identify suspicious email, and help them select passwords that aren't easily compromised, we help increase the organization's security posture.

 

 

 

 

 

Upcoming Microsoft 365 Security & Compliance Partner Practice Enablement Training Events


Starting in late April and running through mid-May are three-day, partner-focused Microsoft 365 Security and Compliance Partner Practice Enablement events. We ran a pilot in Sydney earlier in the year, and the upcoming deliveries will have refreshed content covering the latest changes and announcements relating to Microsoft 365 and the Modern Workplace.

Brisbane 30th April - 2nd May 2018

Sydney 7th May - 9th May 2018

Melbourne 14th May - 16th May 2018.

Following are the official event details.

This 3-day, instructor-led course offers technical training covering Security and Compliance topics including Identity & Access Management, Enterprise-Level Identity Protection, Proactive Attack Detection & Prevention, Controlling & Protecting Information, and GDPR and Regulatory Compliance. The content is delivered through lectures, case studies, videos, demos, and hands-on labs. This unique event complements other partner events by offering technical training intended to help Microsoft partners understand, deploy, and manage the inter-related mix of technologies that enable today’s modern workplace to secure data and comply with regulation while functioning efficiently in a cloud- and mobile-dominated world. REGISTER HERE

Register early to secure your spot; class sizes will be limited.

 
