
Microsoft 365 Deployment and Adoption Support, Maintenance and Support Services (Microlink)


[Provided by: Microlink Co., Ltd.]

We provide Microsoft 365 deployment and technical support for small and medium-sized businesses, mainly in the Tokai region. If you are considering an OS or Office upgrade, please contact us.

 

■About the Microsoft 365 deployment and adoption support, maintenance and support services provided by Microlink Co., Ltd.

We have built numerous Microsoft cloud services, mainly for small and medium-sized businesses.
As a certified Microsoft Gold Partner, we continually maintain high technical skill and up-to-date knowledge to deliver high-quality services.

For Microsoft 365 products as well, we provide the optimal services for your needs, consulting with you at every step from proposal through environment construction, deployment support, operations support, and maintenance.

Please feel free to contact us.

 

 

 

 


A Total Solution for Microsoft 365 Deployment Support, Adoption and Training Support, and Technical Support Services (Touch)


[Provided by: Touch Ltd.]

Drawing on extensive support experience and deep knowledge of PCs and servers, we have now also launched Microsoft 365 deployment and technical support services. If you are considering an OS or Office upgrade, please contact us.

 

■About the total solution for Microsoft 365 deployment support, adoption and training support, and technical support services provided by Touch Ltd.

We provide comprehensive, worry-free support services, from the initial consultation on deploying Microsoft 365 through post-deployment follow-up.

 

"Microsoft 365" is a great-value package that bundles the following Microsoft services:
• Office 365
• Enterprise Mobility + Security (EMS)
• Windows 10

Microsoft 365, a package that bundles business tools, the OS, security, and device management, is recommended for small and medium-sized businesses such as:
• Companies with no dedicated IT staff, or with a short-handed IT team
• Companies not using Active Directory
• Companies that want to take advantage of the collaboration features of Office 365
• Companies that need a device management and security solution
• Companies under pressure to put a business continuity plan (BCP) in place

We offer the following support services, working closely with companies considering a Microsoft 365 deployment:
• During evaluation: deployment plan proposals, PC selection, technical assistance, and more
• At deployment: PC setup, internal education and training, and more
• After deployment: support when troubles or malfunctions occur
With support plans tailored to your needs, we provide services that let you use Microsoft 365 with confidence.

This is a worry-free support service that only we can provide, backed by more than 20 years of support experience and deep knowledge of PCs and servers.

If you are considering an OS or Office upgrade, or a more secure cloud environment, please feel free to contact us.

[Contact]
IT Support Center Touch
TEL: 052-806-8899

 

 

 

 

Azure AD Conditional Access Q&A


Hello, this is Takada from the Azure & Identity support team.

This post covers Azure AD conditional access, a topic we frequently receive inquiries about.

We have compiled the most frequently asked questions in Q&A format. We will continue to expand this content with behaviors and questions not covered in the existing documentation, so please check back.

 


 

Q. We use Office 365. Can we use conditional access?

A. Yes. Customers who use Office 365 are already using Azure AD as their authentication platform. Conditional access therefore becomes available by additionally purchasing Azure AD Premium licenses.

 


 

Q. Can applications published through Azure AD Application Proxy also be controlled with conditional access?

A. Yes. Any application registered in Azure AD can be controlled with conditional access, including applications published through Azure AD Application Proxy and applications you have developed yourself and registered in Azure AD.

 


 

Q. To apply conditional access rules to guest users invited through Azure AD B2B collaboration, do we need to purchase Azure AD Premium licenses for them?

A. No. You can extend Azure AD Premium features, including conditional access, to guest users at a ratio of up to five guest accounts for each Azure AD Premium license assigned to the tenant. See the following documentation for details.

Azure Active Directory B2B collaboration licensing guidance
https://docs.microsoft.com/ja-jp/azure/active-directory/active-directory-b2b-licensing

 


 

Q. If we purchase Azure AD Premium licenses for the required number of users, can we skip assigning them to those users?

A. No. In addition to purchasing licenses for the required number of users, you must also assign them to those users.

 


 

Q. How many Azure AD Premium licenses do we need to purchase to use conditional access?

A. Azure AD Premium (P1 or higher) must be assigned to every user whose access to applications is evaluated by conditional access. With the current implementation, users without an Azure AD Premium license are still subject to access restrictions if they fall within the scope of a conditional access policy, but using the feature in this state is a license violation.

 


 

Q. Can we create multiple conditional access policies and prioritize their application?

A. No, policies cannot be prioritized. Conditional access policies are independent of one another, and every policy whose conditions match is applied. Consider configuring your policies so that their conditions do not overlap.

 


 

Q. After configuring conditional access for Exchange Online, it also applied to the Office 365 portal. Is this expected behavior?

A. Yes, this is expected behavior. Since August 24, 2017, conditional access policies that target Exchange Online or SharePoint Online also apply to the Office 365 portal. For details, see the following link (English only).

An update to Azure AD Conditional Access for Office.com
https://cloudblogs.microsoft.com/enterprisemobility/2017/08/04/an-update-to-azure-ad-conditional-access-for-office-com/

 


 

Q. We entered our clients' IP address range in the [Locations] condition of conditional access, but access is not being controlled. Why?

A. The [Locations] condition of conditional access uses the global IP address your organization presents when communicating externally (the source global IP address as seen by Azure AD). For example, suppose your internal clients hold private IP addresses and reach Azure AD through a gateway that holds a global IP address. In that case, from Azure AD's perspective, the source IP address is the gateway's global IP address. In such environments, specify the gateway's global IP address in the [Locations] condition.
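If you are unsure which egress address Azure AD actually sees, one quick check from an internal client is to query a public IP echo service from PowerShell. This is a hedged illustration; api.ipify.org is a third-party echo service chosen purely as an example:

# Returns the public (egress) IP address this client presents to the internet,
# which is the address Azure AD evaluates in the [Locations] condition.
Invoke-RestMethod -Uri 'https://api.ipify.org'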

 


 

Q. Can conditional access use the X-Forwarded-For HTTP header to determine the source IP address of a client inside our organization?

A. No. Conditional access cannot use the X-Forwarded-For HTTP header to determine the source IP address of clients inside your organization. The [Locations] condition uses the global IP address of the gateway through which your organization communicates externally (the source global IP address as seen by Azure AD), and Azure AD uses this global address as the location.

The X-Forwarded-For HTTP header is one of the HTTP header fields. Even when a load balancer or similar device translates a client's source IP address, appending the original client IP address to this header lets the destination server identify the connecting client. However, the information carried in the X-Forwarded-For header is an IP address inside your organization, and it does not indicate a location. For this reason, Azure AD currently uses the global IP address of your organization's outbound gateway (the source global IP address as seen by Azure AD) for control.

 


 

Q. Can claim rules and conditional access be used together?

A. Yes, technically they can. In a federated environment with AD FS, conditional access runs after the claim rules have been evaluated. If authentication is denied by a claim rule, the subsequent conditional access processing does not run. However, because the features are similar, and considering operational complexity, we recommend relying primarily on one or the other.

 


 

Q. We have configured conditional access in a way that blocks all users from signing in. Can the configuration be removed?

A. Unfortunately, in this situation you cannot remove the configuration yourself. If you would like it removed, please use our support service. Since you presumably cannot access the Azure portal either, please open the support request from another tenant you own.

To use the support service, select Azure Active Directory in the Azure portal, and then select [New support request]. You can submit your inquiry from a screen like the one below.

 

 

We hope this information is helpful.

For official statements and answers about product behavior, our support team will respond after fully understanding your environment, so please take advantage of our support services.

* The contents of this article (including attachments and linked material) are current as of the date of writing and are subject to change without notice.

Applying AI to Cancer Treatment with Healthcare Partners


[Blog post date: November 28, 2017]

Posted by: Allison Linn

Antonio Criminisi, principal researcher on the InnerEye project (Photo: Jonathan Banks)

 

A team of artificial intelligence experts at Microsoft's research lab in Cambridge, UK, has spent more than a decade finding ways to make AI-assisted cancer treatment more targeted and effective.

Now, the research team behind one of these projects, InnerEye, is seeking help from third-party software providers to better understand how to integrate its research results into the tools medical professionals use to plan cancer treatment. This is part of Microsoft's Healthcare NExT initiative.

On Tuesday, in a keynote speech at the Radiological Society of North America annual meeting in Chicago, project principal researcher Antonio Criminisi said the goal of the private preview is to find partners who can help integrate the project's research into third-party medical software products.

"This is a great learning opportunity for us," said Criminisi, a principal researcher at Microsoft's UK lab.

The InnerEye research project draws on two major areas of AI, machine learning and image recognition, to help medical software providers deliver tools that radiation oncologists can use in radiotherapy, making the distinction between benign and malignant tumors clearer.

This cloud-based "radiomics" service (radiomics being the science of systematically handling large volumes of radiological data) is intended to let providers build products that help radiation oncologists and dosimetrists focus on more detailed work, such as editing and adjusting results.

For example, today, delineating boundaries in an image is a time-consuming, costly manual process. As a result, it is often done only once, at the start of treatment.

Third-party solutions using InnerEye technology would make it practical to monitor the condition throughout the treatment period and to make choices such as adjusting chemotherapy according to the patient's response. In the future, this could lead to more targeted, more effective treatment.

Criminisi, who has spent years researching the application of advanced AI to cancer treatment, says he is delighted that the research is finally beginning to contribute to real-world medicine and deliver social benefit. At the same time, because his team's expertise is AI research rather than healthcare, they are seeking outside partners to put the research to its best use.

"We cannot do it alone," Criminisi said.

 

---

All content on this page is current as of the date of writing and is subject to change without notice. Where formal internal approval or contracts with other companies are required, nothing is final until those are in place. Some or all of the content may be changed, cancelled, or become infeasible for a variety of reasons. Thank you for your understanding.

Office 365 Complete Pack


[Provided by: Japan Business Systems, Inc.]

We support the various preparation tasks that arise between signing the contract and starting use, so that you can begin using the service smoothly.

 

■About the Office 365 Complete Pack provided by Japan Business Systems, Inc.

 

<Service overview>
JBS provides total support for your Office 365 ProPlus deployment!

 

● Planning
Based on our extensive deployment track record, we draw up the deployment plan best suited to your environment.
Working closely with your situation, we plan a smooth rollout and design the post-deployment update flow.

 

● Preparing the Office 365 environment
To ensure smooth use of Office, we prepare the integration between your Office 365 environment and your on-premises environment.
* This item is optional. We first interview you about your requirements, determine whether integration is needed, and then make a proposal. If you already use Office 365, this item is unnecessary.
* Some tasks (such as license registration) must be performed by the customer.

 

● Office 365 ProPlus deployment

 

● Help desk service
Provided remotely or on-site, in whatever form suits your needs.
* The help desk service is optional.

 

● Prerequisites
・The Office version after deployment will be the latest Office available at the time the deployment work is completed.
・Office 365 ProPlus is not the web version of Office; normally you can use the locally installed Office offline, but if it cannot connect to the internet at least once every 30 days it enters reduced-functionality mode, and only limited features remain available.
・Because of how Office 365 ProPlus works, Office is installed fresh rather than upgraded, so customization settings and macros used in your existing Office cannot be carried over. If you would like JBS to handle this, we will provide a separate estimate; please contact us.

 

 

 

BlogMS Microsoft Team Blogs – November 2017 Roll-up

Cloud First, Safety First


Log entry 171204:

Every hour, hacker attacks, data loss, and system outages cause more than 100,000 euros of economic damage in Germany. At least, that is the figure you arrive at if you follow the analysts who have calculated annual damages of around one billion euros. And that estimate only guesses at the attacks on corporate computer systems that go unreported or even unnoticed; the actual damage could be far higher. Nor does it account for the fact that data theft, while not causing immediate damage, can certainly cost a company its competitive advantage.

The second enormity that surveys have revealed in connection with cybercrime is the suspicion that three quarters of companies in Germany have been the target of a hacker attack at least once in the past two years. That would amount to more than two million attack attempts, or, to put it plainly: criminal offenses! Because "hacking" is no trivial misdemeanor.

It is all the more baffling, then, that while data centers are housed in a company's high-security wing, the data lines are only inadequately protected. The greatest security risk is still the human being at the workstation. Every company still has at least one person who would click on any link in an email.

Security is not a static quality goal. When processes in an organization change, new security gaps quickly appear. This is especially true in agile companies, where fast decisions, short learning phases, constant course corrections, and a high development tempo are part of the corporate culture. When planning their data centers, CIOs must therefore keep a constant eye on the living organization they are meant to support. Yet this overwhelms many mid-sized companies.

In the cloud, by contrast, these security aspects are highly automated. Microsoft platforms such as Azure, Office 365, and Dynamics 365 are built on globally distributed, scalable data centers in which a range of services from IaaS to SaaS are offered. For these data centers and services, Microsoft has obtained more than 53 certifications from a wide variety of organizations. (More at this link: https://www.microsoft.com/de-de/trustcenter ) These include global certifications such as ISO 27001, ISO 27018, and SOC 3, industry-specific certifications such as HIPAA, HITRUST, and MPAA, and coverage of regional requirements such as the EU Model Clauses, Canadian privacy law, and the IT-Grundschutz catalog and C5 attestation of the German Federal Office for Information Security (BSI).

As a leading cloud provider, Microsoft also guarantees that its cloud services will comply with the General Data Protection Regulation (GDPR; in German, DSGVO) by the time it takes effect on May 25, 2018. This includes products such as Office 365, Dynamics 365, Microsoft Azure, SQL Server, Enterprise Mobility + Security (EMS), Windows 10, and Microsoft 365. The goals of the GDPR align with Microsoft's existing commitments to security, data protection, and transparency. Microsoft's data centers worldwide use uniform, audited, and proven technologies and offer the same service levels and security standards, for example data encryption using current SSL/TLS protocols. The Microsoft Cloud thus offers a secure path to GDPR compliance.

Beyond that, through the Microsoft Cyber Defense Operations Center we invest more than one billion dollars every year in the proactive defense against security threats. This includes treating security and data protection as a core component of all development and operations processes from the very beginning. With the help of machine learning and artificial intelligence, we can detect at an early stage even attack scenarios that are not yet known or described.

It is downright negligent to do without such a security environment. With Microsoft Azure, the cloud is no less reliable than a company's own data center. On the contrary: the cloud protects against attacks that in-house security features could only fend off at immense cost. After all, you don't fight a complex immune deficiency with remedies from the household medicine cabinet.

 

 

Microsoft Office 365 in Education: Storytelling with Office 365. Examples


With Microsoft Office 365 you can create any visually appealing digital story (storytelling) and deliver it to learners on a variety of personal devices in a form convenient to them, which further increases learners' engagement in the learning process.

What will you know and be able to do after reading this article?

  • How do you create digital stories in Microsoft Office 365 using specific services?
  • What are the advantages of integrating Microsoft Office and Office 365 services and applications when creating digital stories?

People perceive visual information many times faster than text. Images evoke associations, which is why a single image can often say more than hundreds of words. Creating an image for each phrase makes it possible to remember the meaning and the whole story without much effort [1].

Scenario 1. Creating storytelling with OneNote Online

Every good digital story has an easily recognizable structure:

- An introduction, usually short.

- Development of the event, where the storyline unfolds.

- The climax, where the question is answered.

A story needs a character: a hero or heroes. Each character should have distinctive traits that are reflected in the events recounted. The protagonist must inspire sympathy.
A story is interesting only when the listener can empathize with it, subconsciously imagining themselves in the character's place. In other words, a mere statement of facts is not yet a story [1].

In this scenario we focus on choosing a character with OneNote Online for a specific digital story script:

Notes on the diagram:

  1. The character is chosen from the OneNote Online "Stickers", which offer a fairly large selection. Stickers can serve as the basis of a visually clear digital story.
  2. After choosing different images of the character to match the story script, add short text captions for users who will view the story remotely. Along the way you can narrate the story into an audio recording, so users can listen to it if they wish.
  3. If the story is told in a classroom (or online via Skype for Business), you can use the OneNote Online drawing tools to highlight individual scenes and stages of the story while presenting on screen. In this variant, you can add other characters from the OneNote Online "Stickers" as the story is told.

You may want to create even more vivid digital stories (storytelling) using other Office 365 services.

Scenario 2. Creating storytelling with Microsoft Sway

Integrating Microsoft Office and Office 365 services and applications when creating digital stories lets you vary how a story is presented.

Let's continue with our running example:

- suppose you created a digital story in OneNote Online;

- you now need to distribute this story to external users who do not have an Office 365 account at your institution.

In that case, use Microsoft Sway and its "Start from a document" option, i.e., the ability to create a digital story from a Microsoft Office file.

A Sway story can be created from a file, for example a Word file. So we convert the OneNote Online page into a Word file in the following sequence [2]:

- launch the OneNote app on a personal device;

- synchronize the OneNote Online notebook with the app;

- export the desired page to a Word file from the OneNote app (File > Export > Word document);

- convert the Word file into a Sway (Sway > "Start from a document" > Word file);

- add an extra image to the story's title, and in the focus points choose "Show the entire image". A sample humorous digital story about one way of signing in to Office 365, created with Sway, can be viewed at https://sway.com/2ZK2dzPxSJCmEVkX.

Scenario 3. Creating storytelling with PowerPoint and Microsoft Stream

Moving images are more attractive than static ones.
If you add to your stories a video sequence of images [1] that focuses viewers' attention on the key moments of the story, its effectiveness multiplies.
In this case, you can turn the contents of a Word file into a PowerPoint presentation, convert it into a video, and publish the video, for example, to Microsoft Stream [2,3].

A link to the video in Microsoft Stream can then be used, for example, in Sway.

3D objects can also be brought in to portray colorful characters [4].

Sources used:

  1. Microsoft Office 365 in Education. Storytelling with Office 365. Introduction. https://vedenev.livejournal.com/22671.html
  2. Microsoft Office 365 in Education. Organizing learning with Microsoft Stream streaming video. http://vedenev.livejournal.com/17405.html
  3. Microsoft Office 365 in Education. Migrating video from Office Mix to Microsoft Stream. https://vedenev.livejournal.com/21388.html
  4. Microsoft Office 365 in Education. 3D, Windows 10, and OneDrive Office 365. https://vedenev.livejournal.com/19699.html

This article was written by Vitaly Vedenev.


2018 Microsoft Azure Community Study Groups


Interested in earning your Microsoft Azure MCSA, MCSD, or MCSE Certification? Need to study in a way that complements your busy schedule? The Microsoft Azure Community Study Group is what you’ve been looking for!

Microsoft is hosting a community-based study group that helps you prepare for the Microsoft Azure Certification exams. Each study group lasts 8-12 weeks depending on the number of exam objectives, and each week you'll have self-study homework to complete at your own pace in preparation for our calls on Friday. We'll meet to discuss specific exam objectives, and you can interact with Microsoft experts and other students. During the week, ask questions in our Yammer group so that your growth stays on track in this fast-paced 300-level learning environment. What a great way to learn!

Registration

Registration is now open for the following study groups:

Exam | Registration Link | Dates
70-532: Developing Microsoft Azure Solutions | https://aka.ms/532asg | March 23 – May 24, 2018
70-533: Implementing Microsoft Azure Infrastructure Solutions | https://aka.ms/533asg | January 12 – April 13, 2018
70-535: Architecting Microsoft Azure Solutions | https://aka.ms/535asg | January 12 – May 24, 2018
70-483: Programming in C# | https://aka.ms/483asg | January 12 – March 2, 2018
70-486: Developing ASP.NET MVC Web Applications | https://aka.ms/486asg | January 12 – March 23, 2018
70-487: Developing Microsoft Azure and Web Services | https://aka.ms/487asg | March 9 – May 11, 2018

Seating for this event series is *very limited*, so please register as soon as possible. Once your registration is complete, join our private Yammer Group, where we encourage you to interact with the other students in the class.

Thank you for your interest in building your knowledge and pursuing a Microsoft Azure Certification. We look forward to seeing you online!

Steve

Simple PowerShell Network Capture Tool


Hello all. Jacob Lavender here again for the Ask PFE Platforms team to share with you a little sample tool that I've put together to help with performing network captures. This all started when I was attempting to develop an effective method to perform network traces within an air-gapped network. My solution had to allow me to use all native functionality of Windows, without access to any network capture tools such as Message Analyzer, NETMON, or Wireshark. In addition, I'd need to be able to collect the trace files into a single location and move them to another network for analysis.

Well, I know the commands. The challenge is building a solution that junior admins can use easily. Several weeks later I found the need for it again with another customer supporting Office 365. This process resulted in the tool discussed in this post.

Time and time again, it seems that we've spent a great deal of effort on the subject of network captures. Why? Because one of the first questions a PFE is going to ask you when you troubleshoot an issue is whether you have network captures. Same is true when you go through support via other channels. We always want them, seem to never get enough of them, and often they are not fun to get, especially when dealing with multiple end points.

So, let's briefly outline what we're going to cover in this discussion:

Topic #1: How to get the tool.

Topic #2: Purpose of the tool.

Topic #3: Requirements of the tool.

Topic #4: How to use the tool.

Topic #5: Limitations of the tool.

Topic #6: How can I customize the tool?

Topic #7: References and recommendations for additional reading.

Compatible Operating Systems:

  • Windows 7 SP1
  • Windows 8
  • Windows 10
  • Windows Server 2008 R2
  • Windows Server 2012 R2
  • Windows Server 2016

Topic #1: Where can I get this tool?

https://gallery.technet.microsoft.com/Remote-Network-Capture-8fa747ba

Topic #2: What is the purpose of this tool as opposed to other tools available?

This certainly should be the first question. This tool is focused toward delivering an easy to understand approach to obtaining network captures on remote machines utilizing PowerShell and PowerShell Remoting.

I often encounter scenarios where utilizing an application such as Message Analyzer, NETMON, or Wireshark to conduct network captures is not an option. Much of the time this is due to security restrictions which make it very difficult to get approval to utilize these tools on the network. Alternatively, it could be due to the fact that the issue is with an end user workstation who might be located thousands of miles from you and loading a network capture utility on that end point makes ZERO sense, much less trying to walk an end user through using it. Now before we go too much further, both Message Analyzer and Wireshark can help on these fronts. So if those are available to you, I'd recommend you look into them, but of course only after you've read my entire post.

Due to this, it is ideal to have an effective method to execute the built-in utilities of Windows. Therein lies NetEventSession and NETSH TRACE. Both of these have been well documented. I'll point out some items within Topic #7.

The specific target gaps this tool is focused toward:

  • A simple, easy to utilize tool which can be executed easily by junior staff up to principal staff.
  • A means by which security staff can see and know the underlying code, thereby establishing confidence in its intent.
  • A lightweight utility which can be moved in the form of a text file.

With that said, this tool is not meant to replace functionality which is found in any established tool. Rather it is intended to provide support in scenarios where those tools are not available to the administrator.

Topic #3: What are the requirements to utilize this tool?

  1. An account with administrator rights on the target machine(s).
  2. An established file share on the network which is accessible by both
    1. The workstation the tool is executed from, and
    2. The target machine where the trace is conducted
  3. Microsoft Message Analyzer to open and view the ETL file(s) generated during the trace process.
    1. Message Analyzer does not have to be within the environment the traces were conducted in. Instead, the trace files can be moved to a workstation with Message Analyzer installed.
  4. Remote Management Enabled (a minimal sketch follows this list):
    1. winrm quickconfig
    2. GPO:
      https://www.techrepublic.com/article/how-to-enable-powershell-remoting-via-group-policy/
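For reference, here is a minimal sketch of enabling remoting manually on a target machine from an elevated PowerShell prompt; your environment may require the GPO approach linked above instead, and the computer name is a placeholder:

# Configures the WinRM service, creates a listener, and opens the firewall rules.
Enable-PSRemoting -Force
# Quick check from the admin workstation that the target answers WinRM requests:
Test-WSMan -ComputerName TARGETPC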

Note: Technically, we don't have to have Message Analyzer or any other tool to search within the ETL file and find data. However, to do so, you must have an advanced understanding of what you're looking for. Take a better look at Ed Wilson's great post from the Hey, Scripting Guy! Blog:

https://blogs.technet.microsoft.com/heyscriptingguy/2015/10/14/packet-sniffing-with-powershell-looking-at-messages/

Topic #4: How do I use this tool?

Fortunately, this is not too difficult. First, ensure that the requirements to execute this tool have been met. Once you have the tool placed on the machine you plan to execute from (not the target computer), execute the PS1 file.

PFE Pro Tip: I prefer to load the file with Windows PowerShell ISE (or your preferred scripting environment).

Note: You do not have to run the tool as an administrator. Rather, the credentials supplied when you execute the tool must be an administrator on the target computer.

Additional Note: The tool is built utilizing functions as opposed to a long script. This was intentional as to allow the samples within the tool to be transported to other scripts for further use – just easier for me. While I present the use of the tool, I'll also discuss the underlying functions.

Now, that I have the tool loaded with ISE, let's see what it looks like.

  1. The first screen we will see is the legal disclaimer. These are always the best. I look forward to executing tools and programs just for the legal disclaimers. In my case, I'm going to accept. I will warn you that if you don't accept, then the tool will exit. I'm sure you're shocked.

  2. Ok, now to the good stuff. Behind the scenes the tool is going to clear any stored credentials within the variable $credentials. If you have anything stored in that variable within the same run space as this script, buckle up. You're going to lose it. Just FYI.
  3. Next, the tool is going to ask you for the credentials you wish to use against the target computer. Once you supply the credentials, the tool is going to validate that the credentials provided are not null, and if they are not, it will test their validity with a simple Get-ADDomain query. If these tests fail, the tool will wag the finger of shame at you.

  4. After supplying the credentials, we will be asked to supply a file share to move the capture files to.

Note: The file share must be accessible from both the local client and the target computers. Here is why:

  • The tool is going to validate that the path you provided is available on the network. I'm assuming that after the capture is complete you will want to have access to the files. However, if the local machine is unable to validate the path, it will give you the option to force the use of the path.
  • Second, the tool is going to attempt to validate the file share path on the target computer. If the path is not accessible by that computer, it will give you the option to update the path. If you do not update the path it will leave a copy of the trace files on the target computer.
  5. Next, we will specify the target machine. Once you specify the machine, the tool will validate it with DNS by performing a query. If the query fails, you will have to correct the machine name. The assumption is that if the query fails, the machine won't be accessible by FQDN (probably a safe assumption unless you're using a hosts file, which is outside the scope of this guide).

  6. Next, we will specify how long we want the network capture to run. The value is in seconds.

Note: As stated by the tool, capture files can take up a great deal of space. However, the defaults within the tool are not very large.

You can customize the values of the network captures. The commands are located within the Start-NETSH and Start-NetEvent functions.

For the purpose of this tool, I utilized the defaults with NO customization.

  7. Now, once we hit enter here, the tool is going to set up a PowerShell session with the target machine. In the background, it performs a few functions:
  • It establishes a PSSession.
  • It establishes the boot volume drive letter.
  • It sets a working path of <bootvolume>:\TEMP\Tracefiles. If this path does not exist, it creates it.
  8. Next, we must specify a drive letter to use for mounting the network share (from Step 4). State any drive letter you want that isn't already in use.

Now, you might be asking why are we mounting a drive letter instead of using the Copy-Item command to the network path. Yeah, I tried that without thinking about it and got a big giant ACCESS DENIED. This is due to the fact that we can't double-hop with our credentials. Kerberos steps in and screams HALT! HALT WITH YOUR DOUBLE-HOP COMMAND!

Great article discussing this problem:

https://blogs.technet.microsoft.com/ashleymcglone/2016/08/30/powershell-remoting-kerberos-double-hop-solved-securely/

If you read the article, you'll see there are multiple ways to address this. I opted for the simple path of just mounting the network share as a drive letter. Simple. Easy. Can be used again without special configuration of computers, servers, or objects in AD. Keep it simple, right? Additionally, we want to minimize any special configuration of systems to accomplish this.
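To make the workaround concrete, here is a hedged sketch of mapping the share inside the remote session with explicit credentials so the copy does not require a second hop. The $session and $cred variables and the share path are placeholders for illustration, not the tool's exact code:

Invoke-Command -Session $session -ScriptBlock {
    param($SharePath, $Credential)
    # Mapping with explicit credentials sidesteps the Kerberos double-hop,
    # because the file server receives credentials directly from this session.
    New-PSDrive -Name 'Z' -PSProvider FileSystem -Root $SharePath -Credential $Credential | Out-Null
    Copy-Item -Path "$env:SystemDrive\TEMP\Tracefiles\*" -Destination 'Z:\'
    Remove-PSDrive -Name 'Z'
} -ArgumentList '\\fileserver\captures', $cred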

Now, again in the background the tool is performing a little extra logic:

  • It first validates that a drive is not already mounted with the network path provided from Step 4. That would be silly to do it twice.
  • Next, once you provide a drive letter, it validates that you didn't select one already in use.

Great. Now to the really good stuff.

  9. Our next screen presents us with the option to select the capture method we wish to use. Both have advantages and disadvantages. See the references section for details on these. Really, you should read those articles before selecting a capture method if you are not already familiar with them.

For this example, I'm selecting N for NETSH TRACE. NETSH TRACE provides a CAB file by default, which I'll show you later in this post.

Again, we have some behind the scenes logic happening.

Windows 7 and Windows Server 2008 R2 do not have the NetEventSession option available. So, the utility is going to establish which version of Windows the target computer is running. If the computer is either Win7 or W2K8R2, it will not allow you to use NetEventSession. It will force the use of NETSH TRACE upon you.

NOTE: Also note that the utility is going to provide a report to you at the end of execution. Within that report it includes the running processes on the target computer.

Why?

Well, one of my favorite features of NETMON and Message Analyzer is the conversation tree. I like to know which of my applications are talking and to whom. This is performed on the backend by the application, which maps PIDs to executables. Well, the capture file might not tell me the executable, but it does give me the PID. So, by looking at the report I can identify which PID to focus on and then use that when looking at the network trace file in Message Analyzer. Yay.

  10. Ok, as soon as we select which capture method we are going to use, the tool executes the capture on the remote computer and runs it for the length of time previously specified.

As you can see, it states the location. On the target computer we can even see the temporary files which are put in place for the capture:

Once the specified time is reached, the utility sends a stop command to the target computer to end the network capture:

NOTE: In the event that the utility is disconnected from the target computer prior to the stop command being issued, you can issue the commands locally at the target computer itself:

  • NETSH TRACE: netsh trace stop
  • PowerShell NetEventSession: Get-NetEventSession | Stop-NetEventSession

Finally, the tool will move the files used for the trace to the specified network share, and then remove them from the target computer.

  11. Next, we see that the tool completed its network trace and has placed a report for us in the C:\Temp directory on the local machine we ran the tool from.

If we open that report file, we're going to be presented with this (there are more than two processes within the actual report):

  12. Finally, we are now set to utilize the ETL files as necessary. In my case, I've opened an ETL that was generated on a Windows Server 2008 R2 computer using NETSH TRACE, and I'm looking at the LSASS.EXE process. 100 extra points if you can identify what this process is responsible for.

And finally, what's in that CAB file? Lots of goodies. You're going to want to explore that to better understand all the extra information which is provided about the system from this file.

Topic #5: What are the limitations of the tool?

  1. The tool, at present, can only target a single computer at a time. If you need to target multiple machines, you will need to run a separate instance for each (multiple PowerShell sessions). I would recommend getting each instance to the point of executing the trace, and then starting them all at the same time if you are attempting to coordinate a trace amongst several machines. I'm hoping to release a new version in the future which has the correct arrays and foreach loops built. We're just not there yet.
  2. The variables within the script utilize memory space within the script. They are not set to global. However, I haven't tested this scenario in depth, so I would recommend giving that a test prior to trying it against production machines.
  3. Again, the tool is not meant to replace any other well-established application. Instead, this tool is meant only to fill a niche. You will have to evaluate the best suitable option for your purposes.
  4. The NETSH TRACE and NetEventSession commands have not been customized. This was intentional. I highly recommend that you read some of the additional content found in Topic #6 regarding the scenarios and advanced configuration options available within these commands.

 

Topic #6: How can I customize the tool?

Well, we do need to address some customization options. To do so, you simply need to modify the command invoked against the target computer within the trace type's respective function. The function names are called out below.

NETSH TRACE Customization

Function: Start-NETSH

First, let's start with NETSH TRACE. Yong Rhee has a great article discussing some of the functionality within NETSH TRACE, specifically he uses scenarios:

https://blogs.technet.microsoft.com/yongrhee/2012/12/01/network-tracing-packet-sniffing-built-in-to-windows-server-2008-r2-and-windows-server-2012/

Using NETSH to Manage Traces:
https://msdn.microsoft.com/en-us/library/windows/desktop/dd569142(v=vs.85).aspx

Let's look at some of the built-in scenarios. To do so, execute netsh trace show scenarios:

Next, we can view some of the configuration of the providers within the scenarios using netsh trace show scenario <scenario name>, such as netsh trace show scenario LAN:

From this, we can see that one of the providers is Microsoft-Windows-L2NACP, which is currently configured to event logging level (4), Informational. Well, what if I wanted to configure that to be higher or lower? I can customize the NETSH TRACE command to accommodate this:

netsh trace start Scenario=Lan Provider=Microsoft-Windows-L2NACP Level=5 Capture=Yes TraceFile=$tracefile

This would increase the logging level to (5), Verbose:

Note: This is just one sample of how the NETSH TRACE option within the tool can be customized. There are plenty of other options as well. I strongly recommend that you review Netsh Commands for Network Trace:

https://technet.microsoft.com/en-us/library/jj129382(v=ws.11).aspx
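Putting it together, here is a hedged sketch of a full customized trace cycle; the file path and duration are examples, not the tool's defaults:

# Start a LAN-scenario trace with the L2NACP provider turned up to verbose.
netsh trace start scenario=LAN provider=Microsoft-Windows-L2NACP level=5 capture=yes tracefile=C:\TEMP\Tracefiles\lan.etl
Start-Sleep -Seconds 120   # let the capture run
# Stopping finalizes the ETL file and generates the accompanying CAB file.
netsh trace stop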

NetEventSession Customization

Function: Start-NetEvent

Fundamentally, this is going to be the same as customizing NETSH TRACE. You simply have to know what you're looking for. In this case, we are going to focus on two aspects.

Configuring the NetEventSession: This overall is simple. As a whole we're not going to change too much on this. I'd recommend reviewing the New-NetEventSession documentation:

https://docs.microsoft.com/en-us/powershell/module/neteventpacketcapture/new-neteventsession?view=win10-ps

Now, the real meat of the capture: the NetEventProvider. The default used natively within the tool is the Microsoft-Windows-TCPIP provider. However, there are quite a few others available. You may want to output the list to a file, as there will be several.

From PowerShell, execute:

Get-NetEventProvider -ShowInstalled

What you should notice is that the providers are all set with a default configuration. You can adjust these as necessary as well using:

Set-NetEventProvider

https://docs.microsoft.com/en-us/powershell/module/neteventpacketcapture/set-neteventprovider?view=win10-ps

https://technet.microsoft.com/en-us/library/dn268515(v=wps.630).aspx

By adding an additional Invoke-Command line within the Start-NetEvent function, you can easily customize the provider(s) which you wish to use within the network capture session.
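For illustration, here is a minimal sketch of a session that adds a second provider. The session name, file path, and extra provider are assumptions, not the tool's exact code:

# Create the session, attach providers, capture for two minutes, then clean up.
New-NetEventSession -Name 'Trace01' -LocalFilePath 'C:\TEMP\Tracefiles\trace01.etl'
Add-NetEventProvider -Name 'Microsoft-Windows-TCPIP' -SessionName 'Trace01'
Add-NetEventProvider -Name 'Microsoft-Windows-SMBClient' -SessionName 'Trace01'
Start-NetEventSession -Name 'Trace01'
Start-Sleep -Seconds 120
Stop-NetEventSession -Name 'Trace01'
Remove-NetEventSession -Name 'Trace01'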

Customization Conclusion: For both NETSH TRACE and NetEventSession, I would recommend making adjustments to the commands locally on a test machine and validating the results prior to executing against a remote machine. Once you know the command syntax is correct and the output is what you desire then incorporate that customization back into the tool as necessary.

Topic #7: References and Recommendations for Additional Reading:

  1. Learning how to use Message Analyzer:
    Introduction to Network Trace Analysis Using Microsoft Message Analyzer, Part 2: https://blogs.technet.microsoft.com/askpfeplat/2014/10/12/introduction-to-network-trace-analysis-using-microsoft-message-analyzer-part-2/
  2. Michael Rendino's two posts:
    1. Basic Network Capture Methods: https://blogs.technet.microsoft.com/askpfeplat/2016/12/27/basic-network-capture-methods/
    2. Network Capture Best Practices: https://blogs.technet.microsoft.com/askpfeplat/2017/04/04/network-capture-best-practices/
  3. Victor Zapata's post on Leveraging Windows Native Functionality to Capture Network Traces Remotely. A note on this post: it includes some sample material on running traces against multiple machines at once. I'd recommend exploring this a little.

 

Infrastructure + Security: Noteworthy News (December, 2017-Part 1)


Hello there! Stanislav Belov here to bring you the next issue of the Infrastructure + Security: Noteworthy News series!  

As a reminder, the Noteworthy News series covers various areas, to include interesting news, announcements, links, tips and tricks from Windows, Azure, and Security worlds on a monthly basis. Enjoy! 

Microsoft Azure
Transforming your VMware environment with Microsoft Azure

Microsoft on November 21, 2017, announced new services to facilitate your VMware migration to Azure.

  • On November 27, 2017, Azure Migrate, a free service, will be broadly available to all Azure customers. Azure Migrate can discover your on-premises VMware-based applications without requiring any changes to your VMware environment.
  • Integrate VMware workloads with Azure services.
  • Host VMware infrastructure with VMware virtualization on Azure.
Free e-book download: Enterprise Cloud Strategy
In the second edition of the Enterprise Cloud Strategy e-book, we've taken the essential information for how to establish a strategy and execute your enterprise cloud migration and put it all in one place. This valuable resource for IT and business leaders provides a comprehensive look at moving to the cloud, as well as specific guidance on topics like prioritizing app migration, working with stakeholders, and cloud architectural blueprints. Download now.
Azure Hybrid Benefit for Windows Server
For customers with Software Assurance, Azure Hybrid Benefit for Windows Server allows you to use your on-premises Windows Server licenses and run Windows virtual machines on Azure at a reduced cost. You can use Azure Hybrid Benefit for Windows Server to deploy new virtual machines from any Azure supported platform Windows Server image or Windows custom image, as long as the image doesn't come with additional software such as SQL Server or third-party marketplace images.
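As a hedged illustration of how the benefit is requested at deployment time (resource names are placeholders, and $vm is assumed to be a VM configuration built with New-AzureRmVMConfig from a Windows Server image):

# -LicenseType 'Windows_Server' signals that an on-premises Windows Server
# license with Software Assurance covers the OS cost of this VM.
New-AzureRmVM -ResourceGroupName 'rg01' -Location 'westus2' -VM $vm -LicenseType 'Windows_Server'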
Azure Reserved VM Instances (RIs) are generally available for customers worldwide

Effective November 16, Azure RIs enable you to reserve virtual machines on a one- or three-year term and provide up to 72% cost savings versus pay-as-you-go prices.

Azure RIs give you price predictability and help improve your budgeting and forecasting. Azure RIs also provide unprecedented flexibility should your business needs change. We've made it easy to exchange your RIs and make changes such as region or VM family, and unlike other cloud providers, you can cancel Azure RIs at any time and get a refund.

Azure Interactives

Stay current with a constantly growing scope of Azure services and features. Learn how to manage and protect your Azure resources efficiently and how to solve common design challenges.

Azure AD Pass-through Authentication

Azure Active Directory (Azure AD) Pass-through Authentication allows your users to sign in to both on-premises and cloud-based applications using the same passwords. This feature provides your users a better experience - one less password to remember, and reduces IT helpdesk costs because your users are less likely to forget how to sign in. When users sign in using Azure AD, this feature validates users' passwords directly against your on-premises Active Directory.

Windows Server
Why use Storage Replica?
Storage Replica offers new disaster recovery and preparedness capabilities in Windows Server 2016 Datacenter Edition. For the first time, Windows Server offers the peace of mind of zero data loss, with the ability to synchronously protect data on different racks, floors, buildings, campuses, counties, and cities. After a disaster strikes, all data will exist elsewhere without any possibility of loss. The same applies before a disaster strikes; Storage Replica offers you the ability to switch workloads to safe locations prior to catastrophes when granted a few moments warning - again, with no data loss.

Storage Replica may allow you to decommission existing file replication systems such as DFS Replication that were pressed into duty as low-end disaster recovery solutions. While DFS Replication works well over extremely low bandwidth networks, its latency is very high - often measured in hours or days. This is caused by its requirement for files to close and its artificial throttles meant to prevent network congestion. With those design characteristics, the newest and hottest files in a DFS Replication replica are the least likely to replicate. Storage Replica operates below the file level and has none of these restrictions.
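As a hedged sketch of what enabling it looks like (server, replication group, and volume names are placeholders), a server-to-server partnership is created with the Storage Replica cmdlets:

# Replicates D: from SRV01 to SRV02, using E: on each side for replication logs.
New-SRPartnership -SourceComputerName 'SRV01' -SourceRGName 'RG01' `
    -SourceVolumeName 'D:' -SourceLogVolumeName 'E:' `
    -DestinationComputerName 'SRV02' -DestinationRGName 'RG02' `
    -DestinationVolumeName 'D:' -DestinationLogVolumeName 'E:'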

Windows Client
Announcing Windows 10 Insider Preview Build 17035 for PC

Microsoft on November 8, 2017, released Windows 10 Insider Preview Build 17035 for PC to Windows Insiders in the Fast ring and for those who opted in to Skip Ahead. The new build features an ability to mute a tab that is playing media in Microsoft Edge, an ability to wirelessly share files and URLs to nearby PCs using the Near Share feature, improvements to Windows Update, and more.

Move away from passwords, deploy Windows Hello. Today!

Since Windows 10 originally released, we have continued to make significant investments in Windows Hello for Business, making it easier to deploy and easier to use, and we are seeing strong momentum with adoption and usage of Windows Hello. As we shared at the Ignite 2017 conference, Windows Hello is being used by over 37 million users, and more than 200 commercial customers have started deployments of Windows Hello for Business. As many would expect, Microsoft currently runs the world's largest production deployment, with over 100,000 users; however, we are just one of many running at scale, with the second largest having just reached 25,000 users.

Security
Stopping ransomware where it counts: Protecting your data with Controlled folder access

Windows Defender Exploit Guard is a new set of host intrusion prevention capabilities included with Windows 10 Fall Creators Update. One of its features, Controlled folder access, stops ransomware in its tracks by preventing unauthorized access to your important files.
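For illustration, a minimal sketch of turning the feature on with the Windows Defender cmdlets; the extra protected folder path is a placeholder:

# Enable Controlled folder access and protect an additional folder.
Set-MpPreference -EnableControlledFolderAccess Enabled
Add-MpPreference -ControlledFolderAccessProtectedFolders 'D:\Finance'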

Defending against ransomware using system design

Many of the risks associated with ransomware and worm malware can be alleviated through systems design. Referring to our now codified list of vulnerabilities, we know that our solution must:

  • Limit the number (and value) of potential targets that an infected machine can contact.
  • Limit exposure of reusable credentials that grant administrative authorization to potential victim machines.
  • Prevent infected identities from damaging or destroying data.
  • Limit unnecessary risk exposure to servers housing data.
Cybersecurity Reference Architecture & Strategies: How to Plan for and Implement a Cybersecurity Strategy

Planning and implementing a security strategy to protect a hybrid of on-premises and cloud assets against advanced cybersecurity threats is one of the greatest challenges facing information security organizations today.

Join Lex Thomas as he welcomes back Mark Simos to the show as they discuss how Microsoft has built a robust set of strategies and integrated capabilities to help you solve these challenges and build a better understanding of how to construct an identity security perimeter around your assets.

Securing Domain Controllers Against Attack
Domain controllers provide the physical storage for the AD DS database, in addition to providing the services and data that allow enterprises to effectively manage their servers, workstations, users, and applications. If privileged access to a domain controller is obtained by a malicious user, that user can modify, corrupt, or destroy the AD DS database and, by extension, all of the systems and accounts that are managed by Active Directory. Because domain controllers can read from and write to anything in the AD DS database, compromise of a domain controller means that your Active Directory forest can never be considered trustworthy again unless you are able to recover using a known good backup and to close the gaps that allowed the compromise in the process.
Cybersecurity Reference Strategies (Video)
Explore recommended strategies from Microsoft, built based on lessons learned from protecting our customers, our hyper-scale cloud services, and our own IT environment. Get the details on important trends, critical success criteria, best approaches, and technical capabilities to make these strategies real. Discover key learnings and guidance on strategies that cover visibility and control of cloud and mobile assets, moving to an identity security perimeter, balancing preventive measures and detection/response capabilities, focusing on the "cost of attack," protecting information, and applying military lessons learned.
How Microsoft protects against identity compromise (Video)
Identity sits at the very center of the enterprise threat detection ecosystem. Proper identity and access management is critical to protecting an organization, especially in the midst of a digital transformation. This part three of the six-part Securing our Enterprise series where Chief Information Security Officer, Bret Arsenault shares how he and his team are managing identity compromise.
Vulnerabilities and Updates
#AVGater vulnerability does not affect Windows Defender Antivirus

On November 10, 2017, a vulnerability called #AVGater was discovered affecting some antivirus products. The vulnerability requires a non-administrator-level account to perform a restore of a quarantined file. Windows Defender Antivirus is not affected by this vulnerability.

Update 1711 for Configuration Manager Technical Preview Branch—Available Now!

Technical Preview Branch releases give you an opportunity to try out new Configuration Manager features in a test environment before they are made generally available. This month's new preview features include:

  • Improvements to the Run Task Sequence step
  • The option for user interaction when installing applications as system
SharePoint security fixes released with November 2017 PU and offered through Microsoft Update

The article identifies the KB articles of the security fixes released on November 14, 2017, for SharePoint 2010 Suite, SharePoint 2013 Suite, and SharePoint 2016 Suite.

November 2017 security update release

Microsoft on November 14, 2017, released security updates to provide additional protections against malicious attackers. By default, Windows 10 receives these updates automatically, and for customers running previous versions, Microsoft recommends that they turn on automatic updates as a best practice. More information about this month's security updates can be found in the Security Update Guide.

Support Lifecycle
The Azure AD admin experience in the classic Azure portal will retire on November 30, 2017. All admin capabilities are available in the new Azure portal. The Azure Information Protection (or AIP, formerly Rights Management Service) admin experiences will also be retired from the Azure classic portal on November 30, but can be found in the new Azure portal.
Windows Azure Active Directory Sync (DirSync) and Azure AD Sync reached their end of support on April 13, 2017, and DirSync will be deprecated at the end of December 2017, so it is time for customers to upgrade to Azure AD Connect. Azure AD Connect is the single solution replacing DirSync and Azure AD Sync, and it offers new functionality, feature enhancements, and support for new scenarios. Customers must upgrade to Azure AD Connect before January in order to continue synchronizing their on-premises identity data to Azure AD and Office 365. Beginning December 31, Azure AD will no longer accept communications from Windows Azure Active Directory Sync ("DirSync") or Microsoft Azure Active Directory Sync ("Azure AD Sync").
Microsoft Premier Support News
Application whitelisting is a powerful defense against malware, including ransomware, and has been widely advocated by security experts. Users are often tricked into running malicious content, which allows adversaries to infiltrate their network. Application whitelisting defines what is trusted by the IT organization and only allows those trusted applications to run. The Onboarding Accelerator - Implementation of Application Whitelisting consists of 3 structured phases that will help customers identify locations which are susceptible to malware and implement AppLocker whitelisting policies customized to their environment, increasing their protection against such attacks.
A new SQL Server - Migration from Oracle Assessment is available to help customers assess what they need to migrate an Oracle database to SQL Server. Also new is WorkshopPLUS - SQL Server: AlwaysOn Availability Groups and Failover Cluster Instances - Setup and Configuration, which covers in-depth technical and architectural details of implementing the SQL Server AlwaysOn Availability Group (AG) feature in Azure and on-premises.

Azure Stack Is Now in Russia!


On November 30, a key and very important event of the autumn took place: the Microsoft forum "Digital Business Platform"!

The event was truly large-scale: more than 700 in-person attendees, over 40,000 online participants, and more than 40 foundational and unique talks on a range of topics: legal compliance during business transformation, the application of modern cloud technologies, discussions of business optimization, and even a special panel on the development of blockchain and its impact on the future of the IT business. Twenty-eight talks with deep technical content were streamed exclusively to the online audience. And one of the key topics, of course, was the Microsoft cloud platform, Azure.

The unquestioned sensation of the event was the main announcement: Azure Stack is now available in Russia.

Meet Azure Stack!

In short, Azure Stack is an extension of Azure that brings the agility and efficiency of cloud computing to local data centers and makes it possible to build modern applications in hybrid cloud environments with the necessary levels of flexibility, control, and protection.

The cloud market in Russia is growing rapidly and is driving the digital transformation of business. Azure Stack is not only a hybrid cloud solution but also a platform for hyper-converged infrastructure, on which distributed solutions for the industrial Internet of Things and industrial blockchain can be built. The arrival of Azure Stack in Russia confirms that businesses are demanding these scenarios and should help them develop further.

Azure Stack is an extension of Azure; however, the customer retains full control over the use of both local capacity and the public Azure cloud. They can choose where to deploy a new virtual machine instance and where to store data: in their own or a partner's data center in Russia, or in one of Azure's 42 regions around the world.

With the arrival of Azure Stack, there are even more ways to comply with the personal data law, and they are even easier to implement. Azure Stack is installed in the customer's own data center, or at a partner's data center in Russia, and all operations on the data can be carried out right there.

Read about this and other solutions on our website.
Recordings of all the talks are available.

http://msplatform.ru/

The Latest System Center 2012 R2 Update Rollup Has Been Released!!


Hello, this is Masudo from the Microsoft Japan System Center Support Team.
Apologies for the delayed announcement: the System Center update rollup was published last week.
Although mainstream support for System Center 2012 R2 has already ended, this rollup has been released to add support for Transport Layer Security (TLS) protocol version 1.2.

 

Deployment guide for TLS 1.2 protocol support in System Center 2012 R2
https://support.microsoft.com/ja-jp/kb/4055768

 

Depending on the hotfix, the database may be updated automatically at installation time.
For that reason, in case the installation fails or the hotfix hits a fatal error, back up the system and the databases before applying it whenever possible.
For System Center products, unless you are explicitly instructed to uninstall, problems may occur after uninstalling a hotfix that has been applied.

These updates can be downloaded and installed via Microsoft Update. In offline environments, you can also manually apply packages downloaded from the Microsoft Update Catalog. For detailed installation steps and the contents of each fix, see the respective links below.

 

Description of Update Rollup 14 for Microsoft System Center 2012 R2
https://support.microsoft.com/ja-jp/kb/4043306

 

 

・Data Protection Manager (KB4043315)
https://support.microsoft.com/ja-jp/kb/4043315
* Agents must be updated after the update is applied.

 

・Operations Manager (KB4024942)
https://support.microsoft.com/ja-jp/kb/4024942
* After the update is applied, registry changes, SQL script execution, management pack imports, and similar steps are required.
Note also that a specific update order is prescribed, so take care.
* You may be prompted to restart the SCOM management server during installation.

 

・Orchestrator (KB4047356)
https://support.microsoft.com/ja-jp/kb/4047356
* Check the prerequisites before applying the update.

 

・Service Manager (KB4024037)
https://support.microsoft.com/ja-jp/kb/4024037
* Pay attention to dependencies on other components during installation.

 

・Virtual Machine Manager (KB4041077)
https://support.microsoft.com/ja-jp/kb/4041077
* Host agents must be updated after the update is applied.
* You may be prompted to restart the SCVMM server during installation.

 

Join the US SMB Partner Insider call on Wednesday, December 6, 2017



Tim Tetrick

 

Join the Microsoft US team for the December SMB Partner Insider call this Wednesday, December 6, 2017, where you’ll get valuable, actionable information to help your Microsoft business grow. Plus, registration is open for the January through June Insider calls!

The December agenda will cover:

  • Insider Scoop: Covering events, training, offers in market, marketing campaign content and more
  • Technical Demo: Getting started with Microsoft 365
  • Cloud Enablement Desk: Learn about this resource that helps partners build and accelerate their Microsoft practice

STAY IN THE KNOW

We look forward to you joining us on the December 6 Partner Insider call!

ESE Deep Dive: Part 1: The Anatomy of an ESE database


Hi!

Get your crash helmets on and strap into your seatbelts for a JET engine / ESE database special...

This is Linda Taylor, Senior AD Escalation Engineer from the UK, here again. And WAIT...... I also somehow managed to persuade Brett Shirley to join me in this post. Brett is a Principal Software Engineer in the ESE Development team, so you can be sure the information in this post is going to be deep and confusing but really interesting and useful, and the kind you cannot find anywhere else :- )
BTW, Brett used to write blogs before he grew up and got very busy. And just for fun, you might find this old "Brett" classic entertaining. I have never forgotten it. :- )
Back to today's post... this will be a rather more grown-up post, although we will talk about DITs, but in a very scientific fashion.

In this post, we will start from the ground up and dive deep into the overall file format of an ESE database file, including practical skills with esentutl such as how to look at raw database pages. And as the title suggests, this is Part 1, so there will be more!

What is an ESE database?

Let’s start basic. The Extensible Storage Engine (ESE), also known as JET Blue, is a database engine from Microsoft that does not speak SQL. And Brett also says… for those with a historical bent, or from academia, who remember ‘before SQL’ rather than ‘NoSQL’: ESE is modelled after the ISAMs (indexed sequential access method) that were in vogue in the mid-70s. ;-p
If you work with Active Directory (which you must do if you are reading this post 🙂) then you will (I hope!) know that it uses an ESE database. The respective binary is esent.dll (or, since Brett loves Exchange: ese.dll for the Exchange Server install). Applications like Active Directory are all ESE clients and use the JET APIs to access the ESE database.

[Image: diagram of ESE clients (such as Active Directory) calling the JET APIs on top of the ESE database engine]

This post will dive deep into the blue parts above – the ESE side of things. AD is one huge client of ESE, but there are many other Windows components which use an ESE database (and non-Microsoft software too), so your knowledge in this area is applicable to those other areas as well. Some examples are below:

[Image: table of other Windows components that use ESE databases]

Tools

There are several built-in command-line tools for looking into an ESE database and related files.

  1. esentutl. This is a tool that ships in Windows Server by default for use with Active Directory, Certificate Authority and any other built-in ESE databases. This is what we will be using in this post, and it can be used to look at any ESE database.

  2. eseutil. This is the Exchange version of the same and typically gets installed in the Microsoft\Exchange Server\V15\Bin sub-directory of the Program Files directory.

  3. ntdsutil. This is a tool specifically for managing AD or ADLDS databases and cannot be used with generic ESE databases (such as the one produced by the Certificate Authority service). It is installed by default when you add the AD DS or ADLDS role.

For read operations, such as dumping file or log headers, it doesn’t matter which tool you use. But for operations which write to the database you MUST use the matching tool for the application and version (for instance, it is not safe to run esentutl /r from Windows Server 2016 on a Windows Server 2008 DB). Further, throughout this article, if you are looking at an Exchange database instead, you should use eseutil.exe rather than esentutl.exe. For AD and ADLDS, always use ntdsutil or esentutl; they have different capabilities, so I use a mixture of both. And Brett says: if you think you can NOT keep the read operations straight from the write operations, play it safe and match the version and application.

During this post, we will use an AD database as our victim example. We may use other ones, like ADLDS, for variety in later posts.

Database logical format - Tables

Let’s start with the logical format. From a logical perspective, an ESE database is a set of tables which have rows and columns and indices.

Below is a visual of the list of tables from an AD database in Windows Server 2016. Different ESE databases will have different table names and use those tables in their own ways.

[Image: list of tables in a Windows Server 2016 AD database (NTDS.DIT)]

In this post, we won’t go into detail about the DNTs, PDNTs and how to analyze an AD database dump taken with LDP, because that is AD-specific and here we are going to look at the ESE level. Also, there are other blogs and sources where this has already been explained – for example, here on AskPFEPlat. However, if such a post is wanted, tell me and I will endeavor to write one!!

It is also worth noting that all ESE databases have a table called MSysObjects, and a table called MSysObjectsShadow which is a backup of MSysObjects. These are also known as “the catalog” of the database, and they store metadata about the client’s schema of the database – i.e.:

  1. All the tables, their table names, where their associated B+ trees start in the database, and other miscellaneous metadata.

  2. All the columns for each table and their names (of course), the type of data stored in them, and various schema constraints.

  3. All the indexes on the tables and their names, and where their associated B+ trees start in the database.

This is the boot-strap information for ESE to be able to service client requests for opening tables to eventually retrieve rows of data.

Database physical format

From a physical perspective, an ESE database is just a file on disk. It is a collection of fixed-size pages arranged into B+ tree structures. Every database has its page size stamped in the header (it can vary between different clients; AD uses 8 KB). At a high level it looks like this:

[Image: database file layout – header (H), shadow header (SH), then numbered pages 1, 2, 3, …]

The first “page” is the Header (H).

The second “page” is a Shadow Header (SH) which is a copy of the header.

However, in ESE “page number” (also frequently abbreviated “pgno”) has a very specific meaning (and often shows up in ESE events): the first NUMBERED page of the actual database is page number / pgno 1, but it is actually the third “page” (if you are counting from the beginning :-).

From here on out, though, we will not count the header and shadow header as proper pages; page number 1 will be the third page, at byte offset = <page size> * 2 = 8192 * 2 = 16384 (for AD databases).
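As a tiny sketch of that arithmetic (our own illustration, not ESE source code; cbDbPage is the page size from the header):

// Byte offset of a numbered page: the header and shadow header occupy
// the first two physical pages, so pgno 1 starts at cbDbPage * 2.
long PageOffset(long pgno, long cbDbPage)
{
    return (pgno + 1) * cbDbPage;
}

// e.g. for an AD database (8 KB pages): PageOffset(1, 8192) == 16384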

If you don’t know the page size, you can dump the database header with esentutl /mh.

Here is a dump of the header for an NTDS.DIT file – the AD database:

[Image: esentutl /mh output for an NTDS.DIT file, including the cbDbPage field]

The page size is the cbDbPage field. AD and ADLDS use a page size of 8k. Other databases use different page sizes.

A caveat is that to be able to do this, the database must not be in use. So you’d have to stop the NTDS service on the DC, or run esentutl against an offline copy of the database.

But the good news is that on WS2016 and above we can now dump a LIVE DB header with the /vss switch! The command you need would be “esentutl /mh ntds.dit /vss” (note: it must be run as administrator).
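Putting that together, the two header dumps look like this (the paths are examples; run the /vss variant from an elevated prompt):

REM Offline copy of the database:
esentutl /mh C:\Temp\ntds.dit

REM Live database on WS2016 and above:
esentutl /mh C:\Windows\NTDS\ntds.dit /vss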

All these numbered database pages are logically “owned” by various B+ trees where the actual data for the client is contained … and all these B+ trees have a “type of tree”, and all of a tree’s pages have a “placement in the tree” flag (Root, or Leaf, or implicitly Internal if neither root nor leaf).

Ok, Brett, that was “proper” tree and page talk -  I think we need some pictures to show them...

Logically the ownership / containing relationship looks like this:

[Image: logical ownership – the database file contains multiple B+ trees, each owning a set of pages]

More about B+ Trees

The pages are in turn arranged into B+ trees, where the top page is known as the ‘Root’ page and the bottom pages are ‘Leaf’ pages, where all the data is kept. Something like this (note this particular example does not show ‘Internal’ B+ tree pages):

[Image: a two-level B+ tree – a root page with partial keys pointing to leaf pages (pgno 13, 14, …)]

  • The upper / parent page has partial keys indicating that all entries with 4245 + A* can be found in pgno 13, and all entries with 4245 + E* can be found in pgno 14, etc.

  • Note this is a highly simplified representation of what ESE does … it’s a bit more complicated.

  • This is not specific to ESE; many database engines have either B trees or B+ trees as a fundamental arrangement of data in their database files.

The Different trees

You should know that there are different types of B+ trees inside the ESE database that are needed for different purposes. These are:

  1. Data / Primary trees – hold the table’s primary records, which are used to store data for regular (and small) column data.

  2. Long Value (LV) trees – used to store long values; in other words, large chunks of data which don’t fit into the primary record.

  3. Index trees – B+ trees used to store indexes.

  4. Space trees – used to track which pages are owned and which are free / available as new pages for a given B+ tree. Each of the previous three types of B+ tree (Data, LV, and Index) may (if the tree is large) have a set of two space trees associated with it.

Storing large records

Each row of a table is limited to 8k (or whatever the page size is) in Active Directory and AD LDS – i.e., each record has to fit into a single 8k database page. But you are probably aware that you can fit a LOT more than 8k into an AD object or an Exchange e-mail! So how do we store large records?

Well, we have different types of columns as illustrated below:

[Image: record layout showing fixed, variable and tagged column types]

Tagged columns can be split out into what we call the Long Value tree. So in the tagged column we store a simple 4-byte number called a LID (Long Value ID), which points to an entry in the LV tree. We take the large piece of data, break it up into small chunks, and prefix those with a key made of the LID and the offset.

So, if every part of the record were a LID / pointer to an LV, we could essentially fit 1300 LV pointers onto the 8k page. By the way, this is what creates the 1300-attribute limit in AD. It’s all down to the ESE page size.

Now you can also start to see that when you are looking at a whole AD object, you may read pages from various trees to get all the information about your object. For example, for a user with many attributes and group memberships, you may have to get data from a page in the ”datatable” Primary tree + the “datatable” LV tree + the sd_table Primary tree + the link_table Primary tree.

Index Trees

An index is used for a couple of purposes. Firstly, to list the records in an intelligent order, such as by surname in alphabetical order. And secondly, to cut down the number of records to examine, which sometimes greatly helps speed up searches (especially when the ‘selectivity is high’ – meaning few entries match).

Below is a visual illustration (with the B+ trees turned on their side to make the diagram easier) of a primary index, which is the DNT index in the AD database – the Data tree – and a secondary index on dNSHostName. You can see that the secondary index only contains the records which have a dNSHostName populated. It is smaller.

[Image: the primary (DNT) Data tree alongside the smaller secondary dNSHostName index tree]

You can also see that in the secondary index, the primary key is the data portion (the name), and the data is the actual key that links us back to the REAL record itself.

Inside a Database page

Each database page has a fixed header. The header has a checksum, as well as other information like how much free space is on the page and which B-tree it belongs to.

Then we have these things called TAGS (or nodes), which store the data.

A node can be many things, such as a record in a database table or an entry in an index.

The TAGs are actually out of order on the page, but order is established by the tag array at the end.

  • TAG 0 = Page External Header

This contains variable-sized special information about the page, depending upon the type of B-tree and the type of page within the B-tree (space vs. regular tree, and root vs. leaf).

  • TAG 1,2,3, etc are all “nodes” or lines, and the order is tracked.

The key & data are specific to the B-tree type.

And TAG 1 is actually node 0!!! So here is a visual picture of what an ESE database page looks like:

[Image: anatomy of an ESE database page – fixed header, TAG 0 external header, nodes, free space, and the tag array at the end]
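Since the original diagram may not reproduce well here, below is a purely illustrative C#-style sketch of that layout (our own simplification; the field names, sizes and ordering are NOT the real on-disk format):

// Illustrative only - NOT the real ESE on-disk structures.
struct EsePageSketch
{
    // Fixed page header (named after fields seen in esentutl output):
    ulong checksum;   // double header checksum (ECC + XOR, see below)
    uint objidFDP;    // which B-tree this page belongs to
    ushort cbFree;    // cb = count of bytes free on this page
    uint fFlags;      // Root / Leaf / ParentOfLeaf, etc.

    // Body: TAG 0 is the page external header; TAG 1..n are the "nodes"
    // (lines), where TAG 1 is node 0. Node data sits out of order in the
    // page; the tag array at the END of the page establishes the order.
}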

It is possible to calculate a node’s key if you have an object’s primary key. In AD this is a DNT.

The formula for that (if you are ever crazy enough to need it) would be as follows (see the sketch after this list):

  • Start with 0x7F, and if it is a signed INT, append 0x80000000 with the number OR’d in.

  • For example, 4248 –> in hex 1098 –> as key 7F80001098 (note: 5 bytes).

  • Note: the key buffer uses big endian, not little endian (like the x86/amd64 architectures).

  • If it is a 64-bit int, just insert zeros in the middle (9-byte key).

  • If it is an unsigned INT, start with 0x7F and just append the number.

  • Note: Long Value (LID) trees and ESE’s space trees (pgno) are special – no 0x7F (4-byte keys).

  • And finally, other non-integer column types, such as String and Binary types, have a different, more complicated format for keys.
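Here is a minimal sketch of that formula for the signed 32-bit case (our own code, following the bullets above; it reproduces the 4248 example):

byte[] KeyFromSignedInt(int dnt)
{
    // 0x7F prefix, then (0x80000000 OR value) appended in big-endian order.
    uint biased = 0x80000000u | (uint)dnt;
    return new byte[]
    {
        0x7F,
        (byte)(biased >> 24),
        (byte)(biased >> 16),
        (byte)(biased >> 8),
        (byte)biased
    };
}

// Example: 4248 -> 0x1098 -> key bytes 7F 80 00 10 98 (5 bytes), as above.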

Why is this useful? Because, for example, you can take the DNT of an object, calculate its key, and then seek to its page using the esentutl.exe page dump (/m) functionality and its /k option.

The nodes also look different (containing different data) depending on the ESE B+ tree type. Below is an illustration of the different nodes in a Space tree, a Data tree, an LV tree and an Index tree.

[Image: node (key and data) layouts for Space, Data, LV and Index trees]

The green parts are the keys. The dark blue parts are the data.

What does a REAL page look like?

You can use esentutl to dump pages of the database, for example if you are investigating a corruption.

Before we can dump a page, we want to find a page of interest (picking a random page could give you just a blank page) … so first we need some info about the table schema. To start, you can dump all the tables and their associated root page numbers like this:

[Image: esentutl /mm output filtered with findstr – table names with their objidFDP and pgnoFDP]

Note, we have findstr’d the output again to get a nice view of just the tables and their pgnoFDP and objidFDP. Findstr.exe is case-sensitive, so use the exact format or use the /i switch.

objidFDP identifies the table in the catalog metadata. When looking at a database page, we can use its objidFDP to tell which table the page belongs to.

pgnoFDP is the page number of the Father Data Page – the very top page of that B+ tree, also known as the root page. If you run esentutl /mm <dbname> on its own, you will see a huge list of every table and B-tree (except the internal “space” trees), including all the indexes.

So, in this example, page 31 is the root page of the datatable.
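The listing above can be reproduced with something along these lines (the path and the findstr pattern are only examples – adjust them to the tables and columns you care about):

esentutl /mm C:\Temp\ntds.dit | findstr /i "datatable pgnoFDP objidFDP"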

Dumping a page

You can dump a page with esentutl using /m and /p. Below is an example of dumping page 31 from the database – the root page of the “datatable” table, as above.

[Images: esentutl dump of page 31 – the page header fields (objidFDP, cbFree, checksums, fFlags) and its TAG 0 external header plus three nodes]
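The dump above would have been produced by a command of this shape (the path is an example; the page number follows /p, per the options described in the text):

esentutl /m C:\Temp\ntds.dit /p31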

The objidFDP is the number indicating which B-tree the page belongs to, and cbFree tells us how much of the page is free (cb = count of bytes). Each database page has a double header checksum – one ECC (Error Correcting Code) checksum for single-bit data correction, and a higher-fidelity XOR checksum to catch all other errors, including errors of 3 or more bits that the ECC may not catch. In addition, we compute a logged data checksum from the page data, but this is not stored in the header and is only utilized by the Exchange 2016 Database Divergence Detection feature.

You can see this is a root page and it has 3 nodes (4 TAGs – remember, TAG 1 is node 0, also known as line 0! 🙂), and it is nearly empty! (cbFree = 8092 bytes, so only 100 bytes are used for these 3 nodes + page header + external header.)


And notice the PageFlushType, which is related to the JET Flush Map file – something we may talk about in a later post.

The nodes here point to pages lower down in the tree. We could dump a next-level page (pgno 1438)… and we can see the pages getting deeper and more spread out, with more nodes.

[Images: esentutl dump of page 1438 – a ParentOfLeaf page with 294 nodes pointing at leaf pages]

So you can see this page has 294 nodes, which again all point to other pages. It is also a ParentOfLeaf page, meaning these pgno / page numbers actually point to leaf pages (with the final data on them).

Are you bored yet? 😃

Or are you enjoying this like a geek? Either way, we are nearly done with the page internals and the tree climbing here.

If you navigate further down, you will eventually get to a page with some data on it. For example, let’s dump page 69, which TAG 6 is pointing to:

[Images: esentutl dump of page 69 – a Leaf page and its data nodes]

So this one has some data on it (as indicated by the “Leaf page” flag under fFlags).

Finally, you can also dump the data – the contents of a node (i.e. a TAG) – with the /n switch, like this:

[Image: esentutl /n output – the raw node contents, with column IDs, types and data]

Remember: the /n specifier takes a pgno:line (node) specifier … this means that the :3 here dumped TAG 4 from the previous screen. And note that trying to dump “/n69:4” would actually fail.

This /n option will dump all the raw data on the page, along with information about the columns, their contents and their types. The output also needs some translation, because it gives us the column ID (711 in the above example) and not the attribute name in AD (or whatever your database may be). The application developer would then be able to translate those column IDs into something meaningful. For AD and ADLDS, we can translate them to attribute names using the source code.
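For reference, the node dump above corresponds to a command along these lines (path illustrative; pgno 69, node specifier :3, which is TAG 4):

esentutl /m C:\Temp\ntds.dit /n69:3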

Finally, there really should be no need to do this in real life, other than when you are debugging a database problem. However, we hope this provided a good and ‘realistic’ demo to help you understand and visualize the structure of an ESE database and how the data is stored inside it!

Stay tuned for more parts .... which Brett says will be significantly more useful to everyday administrators! 😉

The End!

Linda & Brett


Azure Functions v2 C# Script Async sample with SendGrid


I wanted to use SendGrid with an async method. However, all of the examples on the Internet target the sync method. I'd like to introduce how to write it.

Note, this example is for Azure Functions V2.

Prerequisite

You need a FunctionApp with V2 and the SendGrid extension. After creating a FunctionApp, go to the portal, where you can change the Runtime version. For the SendGrid extension, click "Integrate", then choose SendGrid as the output binding. It will then ask whether you want to install the SendGrid extension – just install it.

Code

Azure Functions V2 brings a lot of changes for C# Script; it has several differences from V1. SendGrid used to provide the Mail class; now it is the SendGridMessage class. Also, HttpRequestMessage turns into Microsoft.AspNetCore.Http.HttpRequest, and the return type changes from HttpResponseMessage to IActionResult.

I couldn't find any examples of async functions for C# Script on V2; however, I just made it async and returned Task<IActionResult>.

 

run.csx

 

#r "Newtonsoft.Json"

#r "SendGrid"

using System.Net;

using Microsoft.AspNetCore.Mvc;

using Microsoft.Extensions.Primitives;

using Newtonsoft.Json; 

using SendGrid.Helpers.Mail; 

public async static Task<IActionResult> Run(HttpRequest req, IAsyncCollector<SendGridMessage> messages, TraceWriter log)

{

 log.Info("SendGrid message");

 string body; 

 var stream = req.Body;

 byte[] result = new byte[stream.Length];

 await stream.ReadAsync(result, 0, (int)stream.Length);

 body = System.Text.Encoding.UTF8.GetString(result);

 var message = new SendGridMessage();

 message.AddTo("your@email.com");

 message.AddContent("text/html", body);

 message.SetFrom("iot@alert.com");

 message.SetSubject("[Alert] IoT Hub Notrtification");

 await messages.AddAsync(message); 

 return (ActionResult)new OkObjectResult("The E-mail has been sent.");

}

function.json

{
  "bindings": [
    {
      "authLevel": "function",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    },
    {
      "type": "sendGrid",
      "name": "messages",
      "apiKey": "SendGridAttribute.ApiKey",
      "direction": "out"
    }
  ],
  "disabled": false
}
You can also specify the e-mail settings in the function.json, like this:
{
  "bindings": [
    {
      "authLevel": "function",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    },
    {
      "type": "sendGrid",
      "name": "messages",
      "apiKey": "SendGridAttribute.ApiKey",
      "to": "some@e-mail.com",
      "from": "iot@alert.com",
      "subject": "IoT Humidity Alert",
      "direction": "out"
    }
  ],
  "disabled": false
}
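For a quick test, you can POST a body to the function's HTTP endpoint and the function will e-mail it onward. A minimal sketch (the app name, function name and key below are placeholders):

curl -X POST "https://<your-app>.azurewebsites.net/api/<FunctionName>?code=<function-key>" -H "Content-Type: text/plain" -d "Humidity threshold exceeded"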


TNWiki Article Spotlight – Creating and Deploying BizTalk HTTP Receive Locations using BTDF framework


Welcome to another Tuesday TNWiki Article Spotlight.

Today, in this blog post, we are going to discuss Creating and Deploying BizTalk HTTP Receive Locations using BTDF framework by Mandar.

This article explains in detail how to use "BTDF framework to create and deploy BizTalk HTTP receive locations".

What is BTDF? Check below for a short description.

BTDF is an open-source deployment framework used for deploying BizTalk applications on local dev boxes as well as to different environments. It provides many facilities that can be used to club together the tasks that need to be performed before and after deployment of the BizTalk application, e.g. restarting the relevant host instances, IIS resets, etc. Another advantage of BTDF is that it is very flexible and can be configured to do a lot of tasks before, during and after the deployment of a BizTalk application. All these tasks can be packaged up as a single MSI file that can be installed on the target environment. It also provides a facility to define the variables related to different environments in a spreadsheet, which simplifies the task of manually maintaining the binding files for the BizTalk application across multiple environments. These are some of the features of BTDF. BTDF has proven to be a very reliable tool for creating build MSIs for BizTalk, and it is a necessary weapon in the arsenal of a professional working on the BizTalk platform.

Mandar has explained the problem statement in his article below:

When a BizTalk application needs to implement the HTTP adapter to receive messages into BizTalk, there are several steps the developer/administrator needs to take care of for the adapter to function properly.

  • Setting up the handler mapping for the BizTalk HTTP receive location
  • Creating an IIS application and binding it to the correct app pool
  • Creating the receive port and receive locations
  • Configuring the receive location to accept messages

The process is long, and missing a step can result in rework. If new receive locations are to be configured to work with the HTTP protocol, the developer/administrator needs to go through the same procedure mentioned above.

There are other solutions to the above problem statement, and Mandar has already added those references, but the one he explains here is different. I believe this article will be a great feast for everyone working on BizTalk deployment with BTDF and HTTP receive locations.

I really appreciate the effort and time spent writing this article; I hope you all enjoy reading it.

See you all soon in another blog post. Please keep reading and contributing!

Best regards,
— Ninja [Kamlesh Kumar]

Microsoft 365 Consultation Center (Microsoft 365相談センター)


[Provided by: SoftBank Commerce & Service Corp.]

Corporate customers with pre-deployment questions about Microsoft 365: please contact the Microsoft 365 Consultation Center operated by SoftBank C&S! Our dedicated staff will answer your inquiries.

 

■ What is the Microsoft 365 Consultation Center provided by SoftBank Commerce & Service Corp.?

"Windows OS support is ending soon. Renewing the licenses we have always used seems fine, but Microsoft 365 also looks interesting. What exactly is good about it?"
"Does our company really need Microsoft 365?"
"What are the differences between the Microsoft 365 plans? Tell us which plan fits us best!"
"We want to know whether it meets our security requirements and whether certain settings are possible."
"We want to try it first, so please send us a quote."
"We are told to pursue 'workstyle reform', but we have no idea what to do or how…"
Do you have concerns like these?
If so, bring them to the Microsoft 365 Consultation Center!

The Microsoft 365 Consultation Center, operated by SoftBank C&S, is a pre-deployment consultation desk exclusively for corporate customers. Drawing on the extensive knowledge and experience with Microsoft services and business that we have built up since SoftBank's founding, our specialist staff will carefully answer the various questions and concerns that corporate customers have when adopting Microsoft 365.
In addition to functional questions before deployment, we propose the best-fit plan and provide quotes for deployment costs. Leveraging our position as an IT distributor, we also introduce and propose third-party add-on solutions to match each customer's unique deployment requirements, removing the obstacles to adoption and supporting an optimal, smooth Microsoft 365 deployment.
* Depending on the nature of the question, we may not be able to answer – for example, inquiries about personal use or post-purchase support. Thank you for your understanding.

To reach the Microsoft 365 Consultation Center, search for "Microsoft 365 相談"!
You can also contact us anytime via our web page.

Microsoft 365 Consultation Center https://licensecounter.jp/microsoft365/
Desk hours: weekdays 9:00–12:00 and 13:00–17:00 (excluding weekends, public holidays, and holidays designated by SoftBank C&S)

 

 

Office Blogs roundup (November 2017) [updated 12/5]


[Image: Office blog banner]

Office Blogs, and the original English-language Office Blogs, deliver a wide range of information about Microsoft Office, from new product announcements to new features and the background behind product development. Bookmark them and check back regularly! Also have a look at the MS Japan Office 365 Tech Blog, where the Japanese technical sales team publishes the information that is most requested in the field.

 

≪Recent updates≫

 

 

For the latest information for customers with an Office 365 subscription, see Office 2016, Office for Mac, Office Mobile for Windows, Office for iPhone and iPad, and Office on Android.

Recently, a roundup of the month's major new features has been published at the end of each month, which makes it easy to get a quick overview.

 

Also, although it is in English, video documentation about the updates is available, so please have a look at that too.

 

To see past roundups, refer to the Office Blogs tag.

To see roundups of the latest product information, refer to the 最新アップデート (latest updates) tag.

 

 

The Azure Stack seminar with PWC on 12/13 will probably be the last one this year


Hello, everyone.

From November through December 1, partner companies we work with organized a series of Azure Stack seminars, in which I took part by delivering one session.

First of all, to everyone involved: thank you for all your work, from planning and preparation to running the events on the day!

And a big thank-you as well to everyone who attended the seminars!

From the speaker's side of the room, most attendees seemed to be listening intently, and I felt that the preparations for the Azure Stack business we have been making with our partners are heading in a good direction.

I intend to follow up as much as I can while watching how business with our partners develops, including whether the sessions and seminars lived up to expectations.

-----

Well, time flies – we are already into December.

As a partner account lead trying to move business forward with several large SIers in parallel, with so many things I want to do and must do, I also want to get the Azure Stack business ready for next year.

One part of that is the Azure Stack seminar with PWC mentioned in the title of this post.

As I believe was touched on in the recent Azure Stack press release, I do not see Azure Stack as a mere replacement for virtualization. I see it as bringing to the customer's data center the new picture of IT that emerged with the rise of the public cloud: more self-service, a different way of building applications, consumption-style billing even though it runs on premises, and a different way of engaging with IT.

Achieving that will sometimes require consulting from a somewhat higher layer, so in our collaborations with several consulting-oriented IT vendors, that is the area I would like them to cover.

PWC, with whom we are running this seminar, is one of those companies, and the seminar title also has a slightly different flavor:

A cloud platform to prepare for Digital Transformation

My own session will not jump straight into Azure Stack either: I plan to talk about Microsoft's shift to the cloud business, Azure's resulting growth into a very advanced set of IT services, and the value of Azure Stack in scenarios where going to the public cloud is difficult.

Since I cannot spend much time explaining Azure Stack itself, I am still wondering how to handle that, but I intend to properly deliver the message I want customers to take away.

Please do join us.

Takazoe
