Channel: TechNet Blogs

Microsoft Connect(); 2018: Empowering every developer to achieve more


Author: Scott Guthrie (Executive Vice President, Cloud and Enterprise Group, Microsoft)

This post is a translation of Empowering every developer to achieve more at Microsoft Connect(); 2018, published on December 4, 2018.

 

Today at Microsoft Connect(); 2018, we announced new innovations that help every developer. We can feel ourselves getting ever closer to a world of ubiquitous computing, as technology continues to transform every consumer and business experience.

For developers, the opportunities to use technologies such as AI, IoT, serverless computing, and containers keep growing. Here are some of the latest innovations Microsoft is working on to help developers achieve even more as they build the applications of the future.
 

Tools for every developer

Microsoft is a company built by developers, for developers, so we understand the opportunities and challenges developers face every day. Today we are introducing developer tools and Azure services that help developers innovate and become more productive than ever.

First, the Azure Machine Learning service is now generally available. The service streamlines how developers and data scientists build, train, and deploy machine learning models. With Azure Machine Learning you can automate model selection and tuning, boost productivity with machine learning DevOps, and deploy models with a single click. The service offers a tool-agnostic Python SDK, so you can use any open-source framework in the Python environment of your choice.

Visual Studio is used by more than 12 million developers worldwide to build new applications and enhance existing ones. Today, Visual Studio 2019 Preview and Visual Studio 2019 for Mac Preview became available for download. Many improvements, such as AI-assisted IntelliSense with IntelliCode, expanded refactoring, and streamlined debugging, make it easier to stay focused on writing code, while Live Share and the new GitHub pull request features enable real-time collaboration. And if you use Azure, whether you are modernizing with containers or building cloud-native solutions with serverless technologies, there is more support than ever.

.NET Core 3 Preview is now available, bringing the Windows Presentation Foundation (WPF) and Windows Forms application frameworks to .NET Core. This enables flexible deployment through side-by-side and self-contained EXEs, improved performance, and the use of native Universal Windows Platform (UWP) controls in Windows Forms and WPF applications via XAML Islands. On the server side, composable UIs built with ASP.NET Core Razor Components are now supported, delivering full-stack web development in .NET for the first time.

Azure Cosmos DB is a fully managed, globally distributed database for building cloud-native, data-driven applications. It supports NoSQL workloads and guarantees low latency under 10 milliseconds along with high availability. Today Microsoft announced the general availability of shared throughput in Azure Cosmos DB. With general availability, the minimum entry point drops to 400 request units (about $24 per month), 1/25 of the previous entry point, making Azure Cosmos DB even more accessible to developers who create databases containing multiple Azure Cosmos DB containers.
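A quick back-of-the-envelope check of the pricing figures above (the 400 RU minimum, the roughly $24/month price, and the 25x reduction come from the announcement; the derived values are plain arithmetic):

```python
# Back-of-the-envelope check of the shared-throughput pricing above.
new_minimum_rus = 400     # new entry point, in request units
monthly_cost_usd = 24     # approximate monthly cost at the new minimum
reduction_factor = 25     # new minimum is 1/25 of the previous one

old_minimum_rus = new_minimum_rus * reduction_factor
cost_per_ru_month = monthly_cost_usd / new_minimum_rus

print(old_minimum_rus)    # 10000 -- the previous entry point, in RUs
print(cost_per_ru_month)  # 0.06  -- USD per request unit per month
```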
 

Microsoft's commitment to open source

The community is at the heart of great developer innovation, which is why open source matters so much. Microsoft works to support developers at every stage of the development lifecycle, from idea to collaboration to deployment. Today's announcements reflect not only the open-sourcing of more Microsoft products to encourage community collaboration, but also our active collaboration with other developers.

Modern container applications often include diverse components such as containers, databases, and virtual machines, and they need a way to be easily packaged and maintained across environments. Announced today, Cloud Native Application Bundles (CNAB) is a new open-source packaging format specification, created in close partnership with Docker and supported by HashiCorp, Bitnami, and others. CNAB lets you manage a distributed application with a single installable file, reliably provision application resources across different environments, and handle application lifecycle management without juggling multiple toolsets.

A year ago we introduced Virtual Kubelet (VK), a pluggable architecture that extends the Kubernetes API so containers can be deployed and managed in compute environments such as serverless and the edge. Since then, many VK providers have been added, enabling integration with services including Azure Container Instances, AWS Fargate, Alibaba ECI, and Azure IoT Edge. Today Microsoft contributed the Virtual Kubelet project to the Cloud Native Computing Foundation (CNCF). Working within the CNCF encourages broader community participation and innovation, and brings Kubernetes orchestration to even more environments.

Also, in response to strong demand from the .NET community, we have open-sourced Windows Presentation Foundation (WPF), Windows Forms, and the WinUI XAML library (WinUI). The initial commits add many namespaces and APIs, with more to come in the coming months. We welcome your contributions to these repositories.

When technology is more accessible, developers are free to choose the best solution for the project at hand. Today Microsoft announced the general availability of Azure Database for MariaDB, an enterprise-ready, fully managed service for the MariaDB community that offers built-in high availability, elastic scaling, and flexible pricing.
 

Democratizing serverless

The benefits of serverless computing are now available across every application pattern. Whether you are writing event-driven functions, running container workloads orchestrated by Kubernetes, or managing APIs implemented on any platform, you no longer need to worry about the underlying infrastructure.

The public preview of virtual nodes for Azure Kubernetes Service (AKS), powered by the open-source Virtual Kubelet technology, enables serverless Kubernetes. This new capability lets you elastically provision additional compute capacity in seconds. With just a few clicks in the Azure portal you can enable virtual nodes and gain the flexibility and portability of a container-focused experience in your AKS environment, without managing the additional compute resources.

Azure Functions lets you build serverless, event-driven applications in the language of your choice, including .NET, JavaScript, and Java. Today, Python support has been added to Azure Functions. You can write Linux-based functions in Python, either as code or as a Docker container, and use local tools such as the CLI and Visual Studio for an end-to-end development experience covering build, debug/test, and publish. Python support opens up serverless approaches for machine learning and automation scenarios.
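To illustrate the shape of the programming model, here is a minimal, self-contained sketch of an HTTP-style Python function handler. The real handler receives an azure.functions.HttpRequest; a tiny stand-in class is used here so the sketch runs anywhere, and all names are illustrative:

```python
# Minimal sketch of an HTTP-triggered function handler in Python.
class HttpRequest:
    """Stand-in for azure.functions.HttpRequest, so the sketch is self-contained."""
    def __init__(self, params):
        self.params = params  # query-string parameters as a dict

def main(req: HttpRequest) -> str:
    # The Functions runtime invokes main() with the incoming request.
    name = req.params.get("name", "world")
    return f"Hello, {name}!"

print(main(HttpRequest({"name": "Connect"})))  # prints: Hello, Connect!
```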

These are just a few of the new tools and services announced today, so be sure to check out the rest of the news. Also join the interactive live coding sessions at Connect(); 2018: tune in online now or watch on demand, explore the sample code published during the event, and share your thoughts on social media with #MSFTConnect. We hope it helps you in your future development.

 


Tip o’ the Week 458 – Grabbing pictures from websites


There are plenty of reasons why you might want to get the URL of a picture that is embedded on a web page, and some of them don’t even risk breaching the copyright of the image’s owner or page author!

Legitimate examples might include things like downloading a company logo from its website so you can include it in a PowerPoint slide; try going to just about any major company site and you’ll probably find it’s not straightforward to save the image file. Ditto all sorts of clever pages that might stop you simply saving the picture to your PC.

Normal behaviour is, mostly, to just right-click on an image – in Edge, you’ll be able to save the picture (or use Cortana to try to give you more details on the image, even trying to guess what’s in the image depending on how straightforward it is – it’s surprisingly good). Ditto if you’re using Chrome, except you can search Google instead. Try the same on a company logo, and you may find you won’t get the option to save or search.

If you want to grab the actual URL for an image on a web page, the foolproof way of getting it is to look at the source – if you don’t mind fishing through maybe a few thousand lines of HTML. It’s not too bad if the image is at the top of the page, but it could prove tedious if it’s elsewhere. In Edge, an easier solution is to right-click on the image and choose Inspect element (you may need to press F12 to get these options in your right-click menu). Chrome has a similar thing, simply called Inspect, which can be invoked with CTRL-SHIFT-I.

The Inspect Element function in browsers is designed to help with web page debugging; it lets a user or designer jump straight to the relevant section of a web page’s source, and inspect or even modify the code behind the page.

As an example, right-click on the logo on www.microsoft.com and Inspect Element. You’ll see the highlighted section is the bit where the logo sits on the page, and immediately next in the hierarchical representation of the page code, you’ll see the <img> tag, denoting that this pertains to the image itself.

Look for the src= part, double-click on it and you’ll see the URL of the image in an editable text box, meaning you can easily copy that to the clipboard and get ready to paste it wherever you need it to go. Try pasting it into a new browser tab just to check that all you’re getting is the logo.
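If you’d rather script this than click through developer tools, a small Python sketch using only the standard library can pull image URLs out of a page’s HTML (the sample HTML string here is made up for illustration):

```python
from html.parser import HTMLParser

class ImageSrcCollector(HTMLParser):
    """Collects the src attribute of every <img> tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.sources.append(value)

html = '<html><body><img src="/images/logo.png" alt="logo"></body></html>'
parser = ImageSrcCollector()
parser.feed(html)
print(parser.sources)  # ['/images/logo.png']
```

Feed it the page source (e.g. fetched with urllib) and it returns every image URL, which you can then resolve against the page address.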


Using a search engine

Of course, there may be easier ways to get an image – using Bing or Google search, for example.

Bing is actually quite a bit better in this regard. When you click on an image in the results from Bing’s Image search, you’ll see a larger preview of the picture along with a few actions you can take – like jump to the originating page; search for other sizes of the same image; use Visual Search to run a query on just some selectable portion of the image; or simply just view it in the browser, thereby opening just that image and showing you the direct URL to it.

In the case of both Google and Bing, if you click on “Share”, then you’ll get a link to the search result of that image rather than the picture itself – so if your plan is to embed the image in another web page or upload it to some other place, then you’ll be frustrated.

Another legitimate use of the original URL for a logo might be to change the icon in Teams – assuming you have permissions to Manage a team site (click the ellipsis to the right of the team name and, if you’re suitably perm-ed up, when you click on the Manage Team option you’ll see a little pencil icon on the logo when you hover over it. Click that to change the picture).

Simply choose Upload picture, paste in the URL of the logo you want to use and you’re off to the races.

Figuratively speaking, anyway. You might have to jigger about with the proportions of the image by downloading it first and editing it elsewhere, as the image will need to be more-or-less square. Built-in icons in Teams appear to be 240x240 pixels in size, so you could target that if you’re resizing.
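If you’re resizing programmatically, the arithmetic for fitting an arbitrary image into a square canvas is simple enough to sketch (the 240x240 figure is the apparent Teams icon size mentioned above; the function name is illustrative):

```python
def fit_into_square(width, height, side=240):
    """Scale (width, height) to fit inside a side x side square,
    preserving aspect ratio; return the new size plus the padding
    needed on each axis to fill out a full square."""
    scale = side / max(width, height)
    new_w, new_h = round(width * scale), round(height * scale)
    pad_w, pad_h = side - new_w, side - new_h
    return new_w, new_h, pad_w, pad_h

print(fit_into_square(800, 600))  # (240, 180, 0, 60): pad 60px vertically
```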

Tip o’ the Week 459 – Building a better phone UI


Microsoft fangrlz and fanbois, shed a tear for the Windows Phone platform, which relaunched with some fanfare just over 8 years ago as “Windows Phone 7 Series” (remember the Microsoft redesigns the iPod packaging spoof?). The original idea with the new platform was that you didn’t need to jump in and out of apps all the time, since apps surfaced their info on the home screen and in a series of Hubs. Check out the original 2010 advert that painted the vision (fairly) clearly…

The hastily-renamed Windows Phone 7 showed up in November 2010 with a comparatively lavish marketing budget, bringing some quite edgy and memorable adverts – like Season of the Witch, or Really? (try not to boke at the scene where the guy drops his phone…)

A year later, and almost 7 years ago to this day, Canadian DJ and electro-music producer Deadmau5 played an amazing light show in London to celebrate the launch of the first Nokia Lumia phone; the fact that his track “Bad Selection” was the one that showcased what the phone looked like did raise a snigger at the time. He was back a year later with another event to celebrate the launch of the Lumia 820 with Windows Phone 8.

Now that Windows Phone has been in the ground for more than a year, it’s worth celebrating its somewhat spiritual successor – the Microsoft Launcher for Android (see ToWs passim, #345 and #438). One of the upsides of the Android platform is that you can effectively re-write the main UI, and most phone manufacturers ship their own variants of common apps (like Contacts, Phone, Messaging etc), so it’s ripe for customizing.

The Launcher brings some of the design elements of Windows Phone to Android, while building in great new ideas – like the swipe right to the “Glance” screen, Bing visual search, Timeline integration with Windows PCs and more.

The Microsoft Launcher has had more than 10 million downloads and has a rating of 4.6 / 5, with over 750,000 reviews – and it’s recognised by many commentators as one of the best Android launchers, even in such a crowded market.

If you’re up for trying out a new release, sign up to be a tester for the Microsoft Launcher Beta – currently offering a major update (5.1) that includes better Cortana functionality, To-Do and Sticky Notes sync from PCs, and more. See details here, and join the community here (Google+ is still a thing – who knew?)

The beta even has a new “Screen time” function that promises to tell you how often and how long you use the phone, and with which apps. Google has shipped a “Digital Wellbeing” feature for its latest Android release (v9 aka Android “Pie”), but many phones won’t get that release for ages, if at all. Microsoft Launcher works on Android 4.2 and later.

Sustainability is rewarded


Logbook entry 181206:

With his change of strategy, Satya Nadella has set Microsoft's course toward the central themes of digital transformation: when everything and everyone becomes part of a world-spanning digital network, it takes far-sighted companies to take responsibility for this complex infrastructure. That is why Microsoft has for years been investing in the availability and security of its cloud datacenters on the one hand and in the ongoing development of cloud services on the other. On top of that, Microsoft invests in the skills of its own employees – above all in future-oriented technologies such as artificial intelligence – and of the people in its partner ecosystem.

We have completely restructured our organic growth around the “intelligent cloud, intelligent edge.” We do not think from one product announcement to the next; we focus on the productivity of every individual user and every individual organization. One sign of this is that more and more customers are becoming partners as well, by making their activities on the Azure platform the core of their own business strategy and product policy. That is how sustainable growth for everyone emerges.

Azure remains the fastest-growing cloud platform in the world. Microsoft partners who align their business model around the associated technologies demonstrably have greater opportunities for growth and profit. The more partners choose this path now, the larger their shared stake becomes in a market that already accounts for more than five percent of global value creation – a share that will double over the next ten years. The blogger Heinz-Paul Bonn has recently pointed out that partner structures are changing.

The stock market also rewards sustainability. Last week Microsoft became – if only briefly – the most valuable company in the world. The Microsoft share may climb more slowly than those of other IT giants, but in times of falling prices it shows more resilience, because analysts reward the strategy of a sustainable cloud ecosystem.

Behind this is a cultural change at Microsoft, and it has a name: Satya Nadella. His aim is not to announce “one more thing” at every show, but to hold the course in every discipline of market leadership. He focuses on the social responsibility Microsoft has taken on over 43 years. The world economy could perhaps prosper without smartphones, but it would collapse without reliable cloud computing. We as a society could keep developing without social media, but not without the personal productivity tools with which we surround ourselves.

Before long, everything will be driven by data, steered by algorithms, and carried by the cloud. That is the challenge Microsoft is taking on, and why we work on sustainable digital strategies. Their milestones consist not of ever-recurring “one more things” but of platforms for security and reliability. And as the stock market shows: sustainability is rewarded.

 

Networking in OpenShift for Windows


Hello again,

Today we will be drilling into a more complex topic following the introduction to Red Hat OpenShift for Windows on premises two weeks ago. We will expand into the networking layer of the architecture that we have chosen for the current developer previews.

You may ask yourself "Why do I care about how networking works?"
The obvious answer would be "Without it your container cannot listen or talk much to others."
What do I mean by that? Networking is the backbone of any IT infrastructure, and container deployments are no different. The various networking components allow containers, nodes, pods, and clusters to communicate with each other and with the outside world.

As a DevOps engineer you will need a core understanding of the networking infrastructure pieces deployed in your container infrastructure and how they interact – be it on bare metal, in VMs on a virtualization host, or in one of the many cloud services – so you can tailor the network setup to your needs.

Terminology

First, let's cover a few buzzwords, TLAs, and other complex things so we are all on the same page.

  • CNI: Container Networking Interface, a specification of a standardized interface defining the container endpoint and its interaction with the node the container runs on.
  • Docker: A popular container runtime.
  • vSwitch: Virtual switch, the central component in container networking. Every container host has one; it provides the basic connectivity for each container endpoint. On the Linux side it is roughly comparable to a Linux bridge.
  • NAT: Network Address Translation, a way to isolate private IP address spaces across multiple hosts and nodes behind a public IP address space.
  • Pod: The smallest atomic unit in a Kubernetes cluster. A pod can host one or more containers, and all containers in a pod share the same IP address.
  • Node: An infrastructure component hosting one or more pods.
  • Cluster: An infrastructure component comprised of multiple nodes.
  • HNS: Host Network Service, a Windows component handling the networking aspects of the Windows container infrastructure.
  • HCS: Host Compute Service, a Windows component supporting the interactions of the container runtime with the rest of the operating system.
  • OVN: Open Virtual Network, which provides network virtualization to containers. In "overlay" mode, OVN can create a logical network among containers running on multiple hosts by programming the Open vSwitch instances running inside those hosts, which can be bare-metal machines or vanilla VMs. OVN uses two data stores:
      • ovn-northbound (OVN-NB): the OpenStack/CMS integration point, holding the high-level desired state (logical ports -> logical switches -> logical routers).
      • ovn-southbound (OVN-SB): holding run-time state, including the location of logical ports and physical endpoints, and the logical pipeline generated from configured and run-time state.
  • OVS: Open vSwitch, well suited to function as a virtual switch in VM environments. In addition to exposing standard control and visibility interfaces to the virtual networking layer, it was designed to support distribution across multiple physical servers.

Here is how all these components fit into the architecture on the Windows worker node. I will talk more about them throughout the post.

[Block diagram depicting the components, their layers, and their relationships, based on the table above]

OpenShift for Windows Networking components

OK, now that we are on the same page let's dive in.

Setup

To recap from the last post, we will have a Linux Red Hat OpenShift Master node, which also serves as the Kubernetes Master, and a Windows Server Core Worker node joined to the Master. The deployment also uses the Docker container runtime on both the Linux and the Windows node to instantiate and execute the containers.
You can deploy the nodes in one VM host, across multiple VM hosts, or on bare metal, and you can deploy more than two nodes in this environment. For the purpose of this discussion we have deployed a separate VM host and will use it to host both the Linux and the Windows node.
Next, let's dig into how the networks are created and how the traffic flows.

Networking Architecture

The image below shows the networking architecture in more detail, zooming into the picture above on both the Linux node and the Windows node.
Looking at the diagram, we can see that several components make up the networking layer.

[Block diagram depicting the two-node architecture for the developer preview of OpenShift for Windows]

OpenShift for Windows Networking Architecture

The components can be grouped as follows:

  • Parts which are open-source components (light orange)
  • Parts which are in the core Windows operating system (bright blue)
  • Parts which are open source, where Microsoft made specific changes to the code and shared them with the community (light blue)

On the Linux side, the open-source components are the container runtime (such as the Docker Engine) and Kubernetes components such as:

  • kube-proxy - the Kubernetes network proxy, which runs on each node and reflects the services defined in the Kubernetes API for traffic forwarding across a set of backends.
  • kubelet - the primary “node agent” that runs on each node. The kubelet works by reading a PodSpec, a YAML or JSON document that describes a pod.

To find out more about Kubernetes components on Linux, check the Kubernetes documentation here.
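As a concrete illustration of what the kubelet consumes, a minimal PodSpec might look like this (the pod name and image here are illustrative only):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # illustrative name
spec:
  containers:
  - name: web              # all containers in this pod share one IP address
    image: nginx:alpine    # illustrative image
    ports:
    - containerPort: 80
```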

On the Windows side, some of these components, like the kube-proxy and the kubelet, have been enhanced by Microsoft to work with Microsoft networking components such as the Host Compute Service (HCS) and the Host Network Service (HNS). These changes allow interoperability with Windows core services and abstract away the differences in the underlying architecture.


One of the differences between Linux Nodes and Windows Nodes in this system is the way the nodes are joined to the Kubernetes cluster. In Linux you would use a command like
kubeadm join 10.127.132.215:6443 --token <token> --discovery-token-ca-cert-hash <cert hash>

On Windows, where the kubeadm command is not available, the join is handled by the Host Compute Service when the resource is created.

The key takeaway is that the underlying architectural differences between Linux and Windows are abstracted away, so setting up Kubernetes for Windows and managing the networking components of the environment is straightforward and mostly familiar if you have done it on Linux before.
Also, since OpenShift calls into Kubernetes, the administrative experience will be uniform across Windows and Linux nodes.
That being said, what we are discussing today is the architecture of the currently available developer preview. Microsoft and Red Hat are working to integrate the Windows CNI into the flow to replace OVN/OVS. We will keep support for OVN/OVS and add other CNI plugins as we progress, but we will switch to the Windows CNI during the first half of 2019, so be on the lookout for an update on that.

To say it with a famous cartoon character of my childhood "That's all folks!"

Thanks for reading this far and see you next time.

Mike Kostersitz

P.S.: If this post was too basic or too high-level, stay tuned for a deeper dive into Windows container networking architecture and troubleshooting common issues, coming soon to a blog near you.

 

How to fix the ATA Lightweight Gateway installation error 0x80096005: Failed to cache payload / Failed to verify payload


7 December 2018

Recently I was doing a review of a Microsoft ATA installation with a customer when we started facing the following symptoms:

  • The ATA Center was complaining about an unresponsive gateway (a domain controller)
  • On the gateway involved, the Microsoft Advanced Threat Analytics Gateway service was stuck in “Starting” status
  • Memory was not overused, and the ATA Center URL was reachable from the gateway
  • Error 500 recorded in the Microsoft.Tri.Gateway-Errors.log file

As all other gateways were running fine, we first tried deleting the gateway object in the ATA Center, reinstalling the ATA gateway, and rebooting the machine. The service still refused to start, with the same errors.

Finally, we took the time to look at the different ATA gateway logs to get the big picture, and we noticed these errors:

C:\Program Files\Microsoft Advanced Threat Analytics\Gateway\Logs\Microsoft.Tri.Gateway-Errors.log

Error [WebClient+<InvokeAsync>d__8`1] System.Net.Http.HttpRequestException: PostAsync failed [requestTypeName=StopNetEventSessionRequest] ---> System.Net.Http.HttpRequestException: Response status code does not indicate success: 500 (Internal Server Error).

C:\Program Files\Microsoft Advanced Threat Analytics\Gateway\Logs\Microsoft.Tri.Gateway.Updater.log

2018-10-19 10:22:09.9317 34888 21 Error [ManagementException] System.Management.ManagementException: Not found

at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)

at System.Management.ManagementObject.Initialize(Boolean getObject)

at System.Management.ManagementBaseObject.get_ClassName()

at System.Management.ManagementClass.GetInstances(EnumerationOptions options)

at Microsoft.Tri.Gateway.Updater.Gateway.NetEventSessionManager.StopSessionAsync(StopNetEventSessionRequest request)

at async Microsoft.Tri.Gateway.Updater.Service.GatewayUpdaterWebApplication.<>c__DisplayClass3_0.<OnInitializeAsync>b__2(?)

at async Microsoft.Tri.Common.Communication.CommunicationHandler`2.InvokeAsync[](?)

Installation log:

C:\Users\ADMINI~1\AppData\Local\Temp\Microsoft Advanced Threat Analytics Gateway_20181019192441.log

[12E0:12C0][2018-10-19T19:20:32]e000: Error 0x80096005: Failed authenticode verification of payload: C:\ProgramData\Package Cache\.unverified\vcRuntimeMinimum_x64

[12E0:12C0][2018-10-19T19:20:32]e000: Error 0x80096005: Failed to verify signature of payload: vcRuntimeMinimum_x64

[12E0:12C0][2018-10-19T19:20:32]e310: Failed to verify payload: vcRuntimeMinimum_x64 at path: C:\ProgramData\Package Cache\.unverified\vcRuntimeMinimum_x64, error: 0x80096005. Deleting file.

[12E0:12C0][2018-10-19T19:20:32]e000: Error 0x80096005: Failed to cache payload: vcRuntimeMinimum_x64

[0A0C:1F80][2018-10-19T19:20:32]e349: Application requested retry of payload: vcRuntimeMinimum_x64, encountered error: 0x80096005. Retrying...

[12E0:12C0][2018-10-19T19:20:32]e000: Error 0x80096005: Failed while caching, aborting execution.

An HTTP error 500 is a server-side error, but in this scenario the key clue was in Microsoft.Tri.Gateway.Updater.log. Looking closely at the logs, we noticed that a WMI “get instances” call was failing for the NetEventSessionManager.

We tried to manually query the class with the following PowerShell command:

Get-WmiObject -Namespace root\StandardCimv2 -Class "MSFT_NetEventSession" | Select Name

Result: blank output; the class is no longer registered, or it is corrupted.

To register a WMI class, we need to perform an operation called MOF recompiling. As the installation setup had failed to do it, and other classes might be in the same situation, we decided to rebuild the entire WMI repository.

Note that rebuilding the repository resets the entire WMI database and recompiles all registered .MOF files listed under the following registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Wbem\CIMOM -> “Autorecover MOFs”
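To see which .MOF files will be recompiled on a given machine, you can read that registry value directly; a small PowerShell sketch (assuming the default key location shown above):

```powershell
# List the .MOF files that a repository rebuild will recompile
(Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Wbem\CIMOM")."Autorecover MOFs"
```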


It’s not uncommon for old third-party software to fail to register its .mof files; you must either compile them manually using the built-in mofcomp.exe or repair/reinstall the software in question.

You are on a domain controller, right? A very sensitive machine, isn’t it? How much (outdated) third-party software do you have? Let’s keep the focus on the ATA problem.

Steps used to reset the WMI repository:

  1. sc config winmgmt start= disabled
  2. net stop winmgmt /y
  3. winmgmt /resetrepository
  4. sc config winmgmt start= auto
  5. net start winmgmt

Rebuilding the WMI repository can take a few minutes depending on the system speed and the number and content of the .MOF files. Don’t stress the machine; take a two-minute break.

If you run the PowerShell query again, it should now return the session information.


Finally, we looked at the ATA Center portal and confirmed that all gateways were healthy.

Conclusion

The ATA expert inside you knows that an extended gap in communication between a gateway and the ATA Center is not a good thing.

ATA detects abnormal behaviors using behavioral analytics and machine learning. An unhealthy gateway means that a certain amount of information is definitively lost. False-positive alerts can then be triggered, costing precious investigation time, or worse, you can miss real suspicious activity.

References:

Troubleshooting ATA using the ATA logs
https://docs.microsoft.com/en-us/advanced-threat-analytics/troubleshooting-ata-using-logs

Xbox One X and Xbox One S alike: "¥5,000 off any Xbox One console" campaign runs from December 10 to December 31, 2018


Microsoft Japan Co., Ltd. (headquarters: Minato-ku, Tokyo) will run the "¥5,000 off any Xbox One console" sale campaign from Monday, December 10, 2018 through Monday, December 31, 2018, selling Xbox One consoles at ¥5,000 (excluding tax) off.

Products eligible for the "¥5,000 off any Xbox One console" sale campaign

Eligible products
All Xbox One consoles

* The purchase price is set by the retailer; please contact the retailer for details.
* Includes all standalone Xbox One consoles and console bundles with games.
* Please check with the retailer as to whether it is participating in this campaign.

For all currently available Xbox One consoles and basic product information, please see the product site.

Xbox One product site


Auditing Changes in Azure Security Center Configuration


Azure Security Center uses Role-Based Access Control (RBAC), which provides built-in roles that can be assigned to users, groups, and services in Azure. When planning to adopt Security Center, make sure to read the Permissions in Azure Security Center article for more information about the key roles and the actions that these roles can perform.

A question that comes up very often is: how can I check who changed my Security Center configuration? Let’s use a very simple example: I noticed that my security contact information only has one email address, when it used to have three. When did that happen? To investigate when this change took place, you can use the Azure Activity Log. For this particular case, you should see an entry similar to the one below:

If you click on the Delete security contact action that succeeded, you will have access to the JSON content, where you will be able to see more information about who performed this action and when it was done.
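The shape of such an Activity Log entry is roughly as follows (a trimmed, illustrative sketch; the caller and timestamp are made up, and the field names follow the Activity Log event schema):

```json
{
  "operationName": { "value": "Microsoft.Security/securityContacts/delete" },
  "status": { "value": "Succeeded" },
  "caller": "admin@contoso.com",
  "eventTimestamp": "2018-12-01T14:23:05Z"
}
```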

The Activity Log can also be very useful for seeing who made changes to the Azure Security Center security policy. Let’s use the policy below as an example. In the past this policy was configured as AuditIfNotExists, and it is now showing as Disabled:

Again, you want to understand when this change was made and who made it. In the Azure Activity Log, look for the operation below:

Note: if you have a lot of activities, you should add a filter where Operation is Create policy assignment.

Open the JSON for this operation and investigate the requestbody; there you will see the MFA policy that was changed to Disabled:

 


Join our community call to hear the latest from Modern Workplace

$
0
0

Thanks to everyone who has joined the Modern Workplace Partner Technical Community calls. So far this quarter, we’ve covered a number of topics during our monthly call series, including:

  • Enabling Firstline Workers with Microsoft Teams
  • Azure Security Center
  • WaaS and Microsoft Managed Desktop Service
  • Microsoft Information Protection and Unified Labeling in Microsoft 365
  • Azure AD Privileged Identity Management
  • Virtual Desktop and Virtual Appliances

If you missed any of these calls, the call decks are available via our Yammer community, and the call recordings are available on the registration page.

The next call will be our Quarterly Update call, where we’ll discuss key announcements made during the past quarter. This call will include the 3 key areas of the Modern Workplace solution area: Productivity, Security, and Modern Desktop.

Register for the December 14 Modern Workplace Partner Community Call to get a quick update of changes to Productivity, EM+S, and Windows & Devices solutions.

We’ll be talking about announcements like increased capabilities for Microsoft SharePoint Framework. Since its launch 20 months ago, SharePoint Framework has continually evolved to support an expanding set of use cases—such as business data dashboards and document integrations—and has delivered company-wide links and tools. Now in its eighth update after launch, SharePoint Framework has added even more new capabilities, including easier deployment options across your Microsoft Office 365 sites, the ability to use Office 365 to host application elements, and built-in capabilities to work with a variety of web services. SharePoint Framework also works with on-premises SharePoint Server 2019 and SharePoint Server 2016 via Feature Pack 2.

During the call we’ll also discuss the recent Windows Virtual Desktop announcement. Windows Virtual Desktop delivers the best virtualized Windows and Office experience on Microsoft Azure. It is the only cloud-based service that delivers a multi-user Windows 10 experience, is optimized for Office 365 ProPlus, and includes free Windows 7 Extended Security Updates. With Windows Virtual Desktop, you can deploy and scale Windows and Office on Azure in minutes, with built-in security and compliance.

We’ll cover these topics and more during the call. Be sure to register and take advantage of the opportunity to hear about key announcements centered on Modern Workplace during the past quarter, and get a preview of what’s coming next quarter.

Check out these resources to learn more:

Register here for the upcoming Modern Workplace December Partner Community Call on Friday, December 14 at 10 am PT.

Modern Workplace Technical Community

 

 

Latest Microsoft 365 update (formerly Office 365 update) video, resources and transcript now available


Hello again to our faithful viewers and welcome to those who just discovered us. As you can see from the title of this article, we have changed the name of our video series.

We have updated the URL to align with the change. While https://aka.ms/o365update-youtube will still work for the foreseeable future, the official URL is now https://aka.ms/m365update-youtube. This month's transcript, including links to additional information on everything we cover, can be accessed by clicking on the following link: Microsoft-365-update-transcript-and-resources-guide.

POST requests from an Outlook add-in are converted to GET


Hello, this is the Microsoft Japan Outlook support team.
This article explains, for Outlook add-in (web add-in) developers, an issue in which a POST request sent from an add-in is converted to a GET.

 

Symptoms
1. Create a task pane with a submit button in an Outlook add-in.

 

2. In the task pane HTML, create a form whose method is set to POST, as follows:

<form action="http://www.contoso.msft/test/index.aspx" name="form1" method="post">
<input type="submit" value="テスト">
<input type="hidden" name="test01" value="0001">
</form>

 

3. When you click the button, a new IE window opens and the following URL appears in the address bar:

http://www.contoso.msft/test/index.aspx

 

4. If you capture the traffic at this point, the client sends the request with the following header, even though POST is specified in the code:

GET /test/index.aspx HTTP/1.1

 

5. The server also receives the request as a GET, and a log entry like the following is recorded:

2018-12-06 00:00:01 10.0.0.4 GET /test/index.aspx - 443 - 192.168.1.1 Mozilla/5.0+(Windows+NT+6.3;+WOW64;+Trident/7.0;+Touch;+rv:11.0)+like+Gecko - 200 0 0 29

 

As shown in step 3, because POST was specified, the URL contains no parameters. Since the request is converted to a GET without parameters, the information in the test01 parameter is lost in this example.

 

Conditions
This occurs with Outlook for Windows. It does not occur with Outlook on the web.

 

Cause
Outlook add-ins open the task pane using an IE component.
When the form is submitted from the task pane, a new IE window is displayed; at that point a limitation inside the IE component causes the POST to be converted to a GET.

 

Workaround
There is no way to preserve the POST request.
You can, however, preserve the parameters by specifying method="get" in the HTML so that the request is sent as a GET from the start.
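For example, the POST form from the repro steps above can be rewritten to send a GET from the start; with this change the test01 parameter is carried in the query string (index.aspx?test01=0001) instead of being lost:

```html
<form action="http://www.contoso.msft/test/index.aspx" name="form1" method="get">
<input type="submit" value="テスト">
<input type="hidden" name="test01" value="0001">
</form>
```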

 

________________________________________

The information in this article (including any attachments and links) is current as of the date of writing and is subject to change without notice.

Exchange Online Fiddler Extension 1.0.60


The Exchange Online Fiddler Extension (EXO Fiddler Extension) version 1.0.60 is now available for download: https://github.com/jprknight/EXOFiddlerExtension/releases

This version adds significant functionality in the authentication space.

  • An authentication column has been added giving key information on which sessions contain authentication headers / data.
  • An 'Office365 Auth' inspector tab has been added showing more detailed information on authentication.
  • A SAML response parser in the 'Office365 Auth' tab: see the SAML responses and the signing certificate.

Screenshots showing what this release brings.

New Authentication tab:

'Office365 Auth' inspector tab. Generally you will see this view, except when you highlight a SAML request/response session. In that case you will see the view below, where you can confirm the SAML token issuer, see responses that include the UserPrincipalName and ImmutableID, and open the signing certificate.

How to discover and monitor a clustered application


 


Monitoring clustered applications is a little tricky.

Clusters contain virtual, abstract objects that can be hosted by any of the nodes.  We need to ensure we always monitor the clustered resource, no matter where it is running.

We cannot simply target discovery and monitoring at the nodes, because by design the clustered resource will only exist on one node at a time, and all the other nodes would generate alerts.

In SCOM, for every clustered resource group that contains an IP Address and a Network Name, we will discover the network name as a Windows Computer object.  We will use this “virtual” Windows Computer as the host for our clustered application class.

Essentially, this will be an MP that contains two discoveries: one lightweight discovery for the "seed" class, then one more granular script-based discovery to find only your clustered application class instances.

We start with a simple seed class discovery.  The purpose of the seed class is to discover all nodes and virtual computers that "could" host the resource.  In this example, I will use a simple registry discovery and ensure Remotable = true; Remotable allows the nodes to run the discovery on behalf of the virtual computers.  Try to use something specific to your application, such as the existence of a specific service registry key, for this discovery.  This seed discovery targets the "Windows Server" class, because we want to pass a property of the Windows Server class, "IsVirtualNode", to our seed class.  The Windows Server class already has a property indicating whether the object is a cluster virtual object (true) or not (empty).

Next, we will use a script-based discovery and *target* the seed class instances.  One of the first things we pass to the script as a parameter is the IsVirtualNode property.  The script will only return discovery data for virtual objects.  Then it will filter further, finding only the instances of your application, through a name, a file, a service, or however you choose to discover it.

 

Here is the MP example:

The class definitions:

<TypeDefinitions>
  <EntityTypes>
    <ClassTypes>
      <ClassType ID="Demo.MyClusteredApp.Seed.Class" Accessibility="Public" Abstract="false" Base="Windows!Microsoft.Windows.LocalApplication" Hosted="true" Singleton="false" Extension="false">
        <Property ID="IsVirtualNode" Type="string" AutoIncrement="false" Key="false" CaseSensitive="false" MaxLength="256" MinLength="0" Required="false" Scale="0" />
      </ClassType>
      <ClassType ID="Demo.MyClusteredApp.Clustered.Class" Accessibility="Public" Abstract="false" Base="Windows!Microsoft.Windows.LocalApplication" Hosted="true" Singleton="false" Extension="false">
        <Property ID="ClResourceGroupName" Type="string" AutoIncrement="false" Key="false" CaseSensitive="false" MaxLength="256" MinLength="0" Required="false" Scale="0" />
        <Property ID="ClResourceName" Type="string" AutoIncrement="false" Key="false" CaseSensitive="false" MaxLength="256" MinLength="0" Required="false" Scale="0" />
      </ClassType>
      <ClassType ID="Demo.MyClusteredApp.ComputersAndWatchers.Group" Accessibility="Public" Abstract="false" Base="SCIG!Microsoft.SystemCenter.InstanceGroup" Hosted="false" Singleton="true" Extension="false" />
    </ClassTypes>
  </EntityTypes>
</TypeDefinitions>

 

The seed discovery:

In this example, I am using a very simple seed discovery, based on the existence of the Print Spooler service.  Normally, you’d want something specific to your application:

<Discovery ID="Demo.MyClusteredApp.Seed.Class.Discovery" Enabled="true" Target="Windows!Microsoft.Windows.Server.Computer" ConfirmDelivery="false" Remotable="true" Priority="Normal">
  <Category>Discovery</Category>
  <DiscoveryTypes>
    <DiscoveryClass TypeID="Demo.MyClusteredApp.Seed.Class">
      <Property PropertyID="IsVirtualNode" />
    </DiscoveryClass>
  </DiscoveryTypes>
  <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.FilteredRegistryDiscoveryProvider">
    <ComputerName>$Target/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</ComputerName>
    <RegistryAttributeDefinitions>
      <RegistryAttributeDefinition>
        <AttributeName>SeedRegKeyExists</AttributeName>
        <Path>SYSTEM\CurrentControlSet\Services\Spooler</Path>
        <PathType>0</PathType> <!-- 0=regKey 1=regValue -->
        <AttributeType>0</AttributeType> <!-- 0=CheckIfExists (boolean) 1=treat data as string 2=treat data as INT -->
      </RegistryAttributeDefinition>
    </RegistryAttributeDefinitions>
    <Frequency>14400</Frequency>
    <ClassId>$MPElement[Name="Demo.MyClusteredApp.Seed.Class"]$</ClassId>
    <InstanceSettings>
      <Settings>
        <Setting>
          <Name>$MPElement[Name="Windows!Microsoft.Windows.Computer"]/PrincipalName$</Name>
          <Value>$Target/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</Value>
        </Setting>
        <Setting>
          <Name>$MPElement[Name="System!System.Entity"]/DisplayName$</Name>
          <Value>$Target/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</Value>
        </Setting>
        <Setting>
          <Name>$MPElement[Name="Demo.MyClusteredApp.Seed.Class"]/IsVirtualNode$</Name>
          <Value>$Target/Property[Type="Windows!Microsoft.Windows.Server.Computer"]/IsVirtualNode$</Value>
        </Setting>
      </Settings>
    </InstanceSettings>
    <Expression>
      <SimpleExpression>
        <ValueExpression>
          <XPathQuery Type="Boolean">Values/SeedRegKeyExists</XPathQuery>
        </ValueExpression>
        <Operator>Equal</Operator>
        <ValueExpression>
          <Value Type="Boolean">true</Value>
        </ValueExpression>
      </SimpleExpression>
    </Expression>
  </DataSource>
</Discovery>

 

The script:

In this example, I first check that IsVirtualNode = true, and if so, continue to find instances of my clustered application.  You will need to customize this part for each app you wish to discover.

#=================================================================================
# Discover Clustered Application
# This discovery runs against a seed class target to find a specific clustered app
#
# Author: Kevin Holman
# v1.0
#=================================================================================
param($SourceId,$ManagedEntityId,$ComputerName,$IsVirtualNode)

# Manual Testing section - put stuff here for manually testing script - typically parameters:
#=================================================================================
# $SourceId = '{00000000-0000-0000-0000-000000000000}'
# $ManagedEntityId = '{00000000-0000-0000-0000-000000000000}'
# $ComputerName = "computername.domain.com"
# $IsVirtualNode = $true
#=================================================================================
# Constants section - modify stuff here:
#=================================================================================
# Assign script name variable for use in event logging.
# ScriptName should be the same as the ID of the module that the script is contained in
$ScriptName = "Demo.MyClusteredApp.Clustered.Class.Discovery.ps1"
$EventID = "9051"
#=================================================================================
# Starting Script section - All scripts get this
#=================================================================================
# Gather the start time of the script
$StartTime = Get-Date
#Set variable to be used in logging events
$whoami = whoami
# Load MOMScript API
$momapi = New-Object -comObject MOM.ScriptAPI
#Log script event that we are starting task
$momapi.LogScriptEvent($ScriptName,$EventID,0,"`n Script is starting. `n Running as ($whoami). `n ComputerName: ($ComputerName) `n IsVirtualNode: ($IsVirtualNode)")
#=================================================================================
# Discovery Script section - Discovery scripts get this
#=================================================================================
# Load SCOM Discovery module
$DiscoveryData = $momapi.CreateDiscoveryData(0, $SourceId, $ManagedEntityId)
#=================================================================================
# Begin MAIN script section
#=================================================================================
#Only continue if this is a cluster virtual
IF (!($IsVirtualNode))
{
  # Log an event to end discovery
  $momapi.LogScriptEvent($ScriptName,$EventID,0,"`n IsVirtualNode = false. Do not return discovery data.")
}
ELSE
{
  # Log an event to continue discovery
  $momapi.LogScriptEvent($ScriptName,$EventID,0,"`n IsVirtualNode = true. `n We will continue discovery on this cluster object. `n ComputerName: ($ComputerName)")
  # ====== BEGIN your custom filter for clustered app ======
  # Now we need to create a filter so that only the app we are looking for is discovered
  # This might be using a cluster powershell filter, WMI query, or looking for a file or process or however makes sense for your app
  # This example will discover any cluster resource with a NetworkName object type, that has a DNS name value that matches the target computer as an example only
  # Import cluster PS module
  Import-Module FailoverClusters
  #Get all the cluster resources on this cluster
  [array]$ClResources = Get-ClusterResource | where {$_.ResourceType.Name -eq "Network Name"}
  FOREACH ($ClResource in $ClResources)
  {
    #Get the Cluster Resource Name
    [string]$ClResourceName = $ClResource.Name
    #Get the NetBIOS name from the ComputerName passed as a param to the script
    [string]$ComputerNameSplit = ($ComputerName.Split("."))[0]
    #Get the DNS name from the cluster network name object
    $ClDNSNameObj = $ClResource | Get-ClusterParameter -Name DnsName
    [string]$ClDNSName = $ClDNSNameObj.Value
    #Get only the NetBIOS name for comparison
    [string]$ClDNSNameSplit = ($ClDNSName.Split("."))[0]
    IF ($ComputerNameSplit -eq $ClDNSNameSplit)
    {
      #Discover stuff
      [string]$ClResourceGroupName = $ClResource.OwnerGroup.Name
      $momapi.LogScriptEvent($ScriptName,$EventID,0,"`n Adding discovery data for: `n ComputerName: ($ComputerName) `n Cluster Resource Name: ($ClResourceName) `n Cluster Resource Group Name: ($ClResourceGroupName)")
      $instance = $DiscoveryData.CreateClassInstance("$MPElement[Name='Demo.MyClusteredApp.Clustered.Class']$")
      $instance.AddProperty("$MPElement[Name='Windows!Microsoft.Windows.Computer']/PrincipalName$", $ComputerName)
      $instance.AddProperty("$MPElement[Name='System!System.Entity']/DisplayName$", $ComputerName)
      $instance.AddProperty("$MPElement[Name='Demo.MyClusteredApp.Clustered.Class']/ClResourceName$", $ClResourceName)
      $instance.AddProperty("$MPElement[Name='Demo.MyClusteredApp.Clustered.Class']/ClResourceGroupName$", $ClResourceGroupName)
      $DiscoveryData.AddInstance($instance)
    }
  }
  # ====== END your custom filter for clustered app ======
}
# Return Discovery Items Normally
$DiscoveryData
# Return Discovery Bag to the command line for testing (does not work from ISE)
# $momapi.Return($DiscoveryData)
#=================================================================================
# End MAIN script section
# End of script section
#=================================================================================
#Log an event for script ending and total execution time.
$EndTime = Get-Date
$ScriptTime = ($EndTime - $StartTime).TotalSeconds
$momapi.LogScriptEvent($ScriptName,$EventID,0,"`n Script Completed. `n Script Runtime: ($ScriptTime) seconds.")
#=================================================================================
# End of script

 

Now, as you can see, this gets pretty complicated to write for each clustered application.

However, it doesn’t have to be complicated.  We can wrap all this up into a fragment and reuse it over and over!  I am adding example fragments to my fragment library:  https://gallery.technet.microsoft.com/SCOM-Management-Pack-VSAE-2c506737

These fragments will make it super easy to reuse this code sample, and just tweak it for your specific clustered apps.

Class.And.Discovery.ClusteredApp.RegistrySeed.mpx

Combo.Class.Discovery.ClusteredApp.RegistrySeed.ComputerWatcherGroup.Views.Folder.mpx

 

When I use the combo fragment, I can see my work pay off instantly.  I only need to provide 3 pieces of information: the CompanyID, the AppName, and the RegKeyPath for the SEED class.  That’s it!


Then I can literally import this into SCOM.  Of course, you will want to customize the sample script first, but all the MP heavy lifting is done.

I have a nice prebuilt folder and state views:


 

I have discovered instances of my SEED class based on the registry:


 

I have discovered instances of my clustered applications as well, including properties about the cluster resource and group they are from:


 

This makes something pretty hard really easy to use: 3 simple pieces of input information, customize the script, and you are done.

 

To see more about MP fragments, check out the basics:

https://blogs.technet.microsoft.com/kevinholman/2016/06/04/authoring-management-packs-the-fast-and-easy-way-using-visual-studio/

https://www.youtube.com/watch?v=9CpUrT983Gc

https://blogs.technet.microsoft.com/kevinholman/2017/03/22/management-pack-authoring-the-really-fast-and-easy-way-using-silect-mp-author-and-fragments/

https://www.youtube.com/watch?v=Vyo-Ic1Wf9E

Top Contributors Awards! SQL Server In-Memory OLTP: Transaction Isolation Levels and many more!


Welcome back for another analysis of contributions to TechNet Wiki over the last week.

First up, the weekly leader board snapshot...

 

As always, here are the results of another weekly crawl over the updated articles feed.

 

Ninja Award Most Revisions Award
Who has made the most individual revisions

 

#1 Peter Geelen with 69 revisions.

 

#2 Dave Rendón with 50 revisions.

 

#3 Nonki Takahashi with 16 revisions.

 

Just behind the winners but also worth a mention are:

 

#4 George Chrysovaladis Grammatikos with 14 revisions.

 

#5 RajeeshMenoth with 13 revisions.

 

#6 Ed Price - MSFT with 13 revisions.

 

#7 Somdip Dey - MSP Alumnus with 13 revisions.

 

#8 Kareninstructor with 8 revisions.

 

#9 Mohsin_A_Khan with 7 revisions.

 

#10 [Kamlesh Kumar] with 7 revisions.

 

 

Ninja Award Most Articles Updated Award
Who has updated the most articles

 

#1 Peter Geelen with 33 articles.

 

#2 Dave Rendón with 20 articles.

 

#3 RajeeshMenoth with 9 articles.

 

Just behind the winners but also worth a mention are:

 

#4 Somdip Dey - MSP Alumnus with 8 articles.

 

#5 Ed Price - MSFT with 7 articles.

 

#6 Nonki Takahashi with 5 articles.

 

#7 Stephan Bren with 2 articles.

 

#8 George Chrysovaladis Grammatikos with 2 articles.

 

#9 Leon Laude with 2 articles.

 

#10 Richard Mueller with 2 articles.

 

 

Ninja Award Most Updated Article Award
Largest amount of updated content in a single article

 

The article to have the most change this week was SharePoint Upgrade: Simple OOTB Inventory Methods Useful for Resolving Missing Dependencies, by Stephan Bren

This week's reviser was Stephan Bren

 

Ninja Award Longest Article Award
Biggest article updated this week

 

This week's largest document to get some attention is SSRS: Manage all report subscriptions, by aduguid

This week's reviser was Peter Geelen

 

Ninja Award Most Revised Article Award
Article with the most revisions in a week

 

This week's most fiddled with article is Small Basic Known Issue: 52240 - Zoomed Triangle Position to Move is Different in Remote, by Nonki Takahashi. It was revised 10 times last week.

This week's reviser was Nonki Takahashi

 

Ninja Award Most Popular Article Award
Collaboration is the name of the game!

 

The article to be updated by the most people this week is ASP.NET Web Applications: How to Avoid Session Hijacking , by Suthish Nair

This week's revisers were Peter Geelen, George Chrysovaladis Grammatikos, RajeeshMenoth & Somdip Dey - MSP Alumnus

 

Ninja Award Ninja Edit Award
A ninja needs lightning fast reactions!

 

Below is a list of this week's fastest ninja edits. That's an edit to an article shortly after another person's edit.

 

Ninja Award Winner Summary
Let's celebrate our winners!

 

Below are a few statistics on this week's award winners.

Most Revisions Award Winner
The reviser is the winner of this category.

Peter Geelen

Peter Geelen has been interviewed on TechNet Wiki!

Peter Geelen has featured articles on TechNet Wiki!

Peter Geelen has won 255 previous Top Contributor Awards. Most recent five shown below:

Peter Geelen has TechNet Guru medals, for the following articles:

Peter Geelen's profile page

Most Articles Award Winner
The reviser is the winner of this category.

Peter Geelen

Peter Geelen is mentioned above.

Most Updated Article Award Winner
The author is the winner, as it is their article that has had the changes.

Stephan Bren

Stephan Bren has won 2 previous Top Contributor Awards:

Stephan Bren has not yet had any interviews, featured articles or TechNet Guru medals (see below)

Stephan Bren's profile page

Longest Article Award Winner
The author is the winner, as it is their article that is so long!

aduguid

Anthony Duguid has won 3 previous Top Contributor Awards:

Anthony Duguid has not yet had any interviews, featured articles or TechNet Guru medals (see below)

Anthony Duguid's profile page

Most Revised Article Winner
The author is the winner, as it is their article that has been changed the most.

Nonki Takahashi

Nonki Takahashi has been interviewed on TechNet Wiki!

Nonki Takahashi has featured articles on TechNet Wiki!

Nonki Takahashi has won 12 previous Top Contributor Awards. Most recent five shown below:

Nonki Takahashi has TechNet Guru medals, for the following articles:

Nonki Takahashi's profile page

Most Popular Article Winner
The author is the winner, as it is their article that has had the most attention.

Suthish Nair

This is the first Top Contributors award for Suthish Nair on TechNet Wiki! Congratulations Suthish Nair!

Suthish Nair has not yet had any interviews, featured articles or TechNet Guru medals (see below)

Suthish Nair's profile page

Ninja Edit Award Winner
The author is the reviser, for it is their hand that is quickest!

Dave Rendón

Dave Rendón has been interviewed on TechNet Wiki!

Dave Rendón has won 80 previous Top Contributor Awards. Most recent five shown below:

Dave Rendón has TechNet Guru medals, for the following articles:

Dave Rendón has not yet had any featured articles (see below)

Dave Rendón's profile page

 

Another great week from all in our community! Thank you all for so much great literature for us to read this week!

Please keep reading and contributing, because Sharing is caring..!!

 

Best regards,

 

End of support for SQL Server 2008 and Windows Server 2008


The end of support for SQL Server 2008 and Windows Server 2008 is approaching faster than it seems!
Don't miss the seminar "End of support for SQL Server 2008 and Windows Server 2008", which explains what will happen, what can go wrong, and how to address it. It is intended not only for partners but also for larger end customers.

Support is ending for two widely deployed server products. Now is the ideal time to upgrade, modernize, and transform to the current versions of these systems. In this seminar we will walk you through the current situation, show you what threats may arise from running these product versions after support ends, and present your options for a solution.

When and where:

December 13, 2018, 9:00 - 13:30
Microsoft office building, conference room "Praha" on the ground floor
Vyskočilova 1561/4a, Praha 4, 140 00, Praha

Agenda:

9:00-9:30 Breakfast and registration

9:30 - 13:00

  • Summary of information and overview of the current situation
  • Possible impacts and threats of running SQL Server 2008 and Windows Server 2008 after the end of support
  • Coffee break
  • Solution options: migration with extended support, and moving to Microsoft Azure with extended support options
  • Licensing

13:00 - 13:30 Lunch and close of the seminar

Registration:

Admission is free, but prior registration is required!

 

// "Devs a Opsové" ... nepřehlédněte prezentace z právě ukončeného SQL Azure DevCampu, možná najdete nějakou další inspiraci 🙂 //

We look forward to seeing you.


Analyzing Azure EA Consumption Using Power BI – Part 1


Analyzing usage and tuning resources is a key responsibility in cloud management. We need to understand where we spend, what the trends are, and where we can tune our spending. When it comes to analyzing Azure usage, Microsoft offers different tool sets with different capabilities:

  • Cloudyn
  • New Azure Cost Management
  • Azure Consumption API Connector for Power BI

In this blog post I will get into the details of using the Consumption API to bring usage metrics into Power BI, and we will use Power BI's capabilities to analyze the data. Power BI gives you the most flexibility when it comes to reporting and customization. In this series I will cover Azure inventory and usage analysis, utilizing tags, MoM/YoY analysis, distributing cost between departments / sub-companies, custom chargeback options, combining consumption data with app usage and resource performance, and more.

Using Azure Consumption Insights in Power BI Desktop

Since connecting data using the new connector is well described in the Azure documentation, I will not go into detail on how to connect. Please follow the steps at
https://docs.microsoft.com/en-us/power-bi/desktop-connect-azure-consumption-insights to connect your usage data.

When connecting to Consumption Insights, I prefer to bring in the following with the connector:
1. Marketplace
2. PriceSheets
3. Summaries


To get the usage details, I start with Get Data / Blank Query and use the Advanced Editor. This approach lets me specify the number of months of usage data. In this sample I will bring in the last 3 months of usage for enrollment 100.

let
enrollmentNumber = "100",
optionalParameters = [ numberOfMonth = 3, dataType="DetailCharges" ],
data = MicrosoftAzureConsumptionInsights.Contents(enrollmentNumber, optionalParameters)
in
data

Finally, I will rename this table to "Usage".

Power BI will ask for the enrollment key, and then you are all set to download the usage details into Power BI. Please note that, depending on the number of months to fetch, downloading all the data might take a while.

Do not forget to save your work; otherwise you might end up downloading the data again in case of a problem!

Shaping Data: Cost by Subscription / Resource Type / Date

Date Table

When we bring EA usage data into Power BI, we first need to do a bit of modeling to get the most out of it. One of the first things is to bring in a Date table. This table will help us with time intelligence in our calculations and with filtering data from multiple sources such as Log Analytics or custom data.

Here I prefer to utilize a dynamic date function from https://gist.github.com/philbritton/9677152

Using this function, we will create a dynamic date table covering the dates in our Summaries table, based on the BillingMonth column.

Here is the M query to generate the dynamic date table:

let CreateDateTable = (StartDate as date, EndDate as date, optional Culture as nullable text) as table =>
let
DayCount = Duration.Days(Duration.From(EndDate - StartDate)),
Source = List.Dates(StartDate,DayCount,#duration(1,0,0,0)),
TableFromList = Table.FromList(Source, Splitter.SplitByNothing()),
ChangedType = Table.TransformColumnTypes(TableFromList,{{"Column1", type date}}),
RenamedColumns = Table.RenameColumns(ChangedType,{{"Column1", "Date"}}),
InsertYear = Table.AddColumn(RenamedColumns, "Year", each Date.Year([Date])),
InsertQuarter = Table.AddColumn(InsertYear, "QuarterOfYear", each Date.QuarterOfYear([Date])),
InsertMonth = Table.AddColumn(InsertQuarter, "MonthOfYear", each Date.Month([Date])),
InsertDay = Table.AddColumn(InsertMonth, "DayOfMonth", each Date.Day([Date])),
InsertDayInt = Table.AddColumn(InsertDay, "DateInt", each [Year] * 10000 + [MonthOfYear] * 100 + [DayOfMonth]),
InsertMonthName = Table.AddColumn(InsertDayInt, "MonthName", each Date.ToText([Date], "MMMM", Culture), type text),
InsertCalendarMonth = Table.AddColumn(InsertMonthName, "MonthInCalendar", each (try(Text.Range([MonthName],0,3)) otherwise [MonthName]) & " " & Number.ToText([Year])),
InsertCalendarQtr = Table.AddColumn(InsertCalendarMonth, "QuarterInCalendar", each "Q" & Number.ToText([QuarterOfYear]) & " " & Number.ToText([Year])),
InsertDayWeek = Table.AddColumn(InsertCalendarQtr, "DayInWeek", each Date.DayOfWeek([Date])),
InsertDayName = Table.AddColumn(InsertDayWeek, "DayOfWeekName", each Date.ToText([Date], "dddd", Culture), type text),
InsertWeekEnding = Table.AddColumn(InsertDayName, "WeekEnding", each Date.EndOfWeek([Date]), type date)
in
InsertWeekEnding
in
CreateDateTable

To create the Date table:

  • Step 1 - Date function:
    Start with Get Data / Blank Query, go to the Advanced Editor, and paste the date function you copied above. When done, rename it to "Dates Query".

  • Step 2 - DateKey table:
    For the DateKey table, we will use the date function (Dates Query) we just defined to generate a dynamic date table. Again, start with Get Data / Blank Query / Advanced Editor. The definition of the DateKey table is:

let
Source = #"Dates Query"(Date.FromText(List.Min(Summaries[BillingMonth])),DateTime.Date(DateTime.LocalNow()))
in
Source


Once we have the DateKey table defined, we need to set up a relationship between Usage and DateKey so we can apply time intelligence. The relationship will be between the Date column in the DateKey table and the Date column in the Usage table.

Calculated Columns

We will extend our Usage table with two new columns. Right-click the Usage table and select New Column.

Resource Type: we will extract the resource type from the Instance ID.

Resource Type = PATHITEM(SUBSTITUTE(Usage[Instance ID], "/", "|"), 8)

Resource Name: we will extract the resource name from the Instance ID.

Resource Name = PATHITEMREVERSE(SUBSTITUTE(Usage[Instance ID], "/", "|"), 1)
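To make the two DAX columns concrete: an ARM instance ID is a "/"-delimited path, and the calculated columns simply pick segments out of it. A quick Python illustration of the same logic (the instance ID below is a made-up example):

```python
# Example ARM instance ID (hypothetical subscription and resource names).
instance_id = ("/subscriptions/0000-0000/resourceGroups/demo-rg"
               "/providers/Microsoft.Compute/virtualMachines/vm01")

segments = instance_id.split("/")
# Splitting on "/" leaves an empty first element because the ID starts with
# "/", so segments[7] is the 8th path item - the same item that
# PATHITEM(SUBSTITUTE(..., "/", "|"), 8) returns in the Resource Type column.
resource_type = segments[7]
# PATHITEMREVERSE(..., 1) returns the last item: the resource name.
resource_name = segments[-1]

print(resource_type, resource_name)
```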

Measure Definitions

We will define some measures to be used in our calculations. For ease of access we will store them under a measures table, _MyMeasures.

To create a new empty table, select Enter Data and rename the table to _MyMeasures.


Now let's add a few measures: right-click _MyMeasures, select New Measure, and paste the following formulas one by one.

Total Resource Count

TotalResources = DISTINCTCOUNT(Usage[InstanceId])

The next measures refer to the two new columns we created in the previous section.

Virtual Machine Count

VMCount = CALCULATE(
DISTINCTCOUNT(Usage[Resource Name]),
FILTER(Usage,Usage[Resource Type]="virtualMachines"))

SQL Instance Count

SQLInstanceCount = CALCULATE(
DISTINCTCOUNT(Usage[Resource Name]),
FILTER(Usage,Usage[Resource Type]="servers"&&Usage[Consumed Service]="Microsoft.Sql"))

Dashboards: Summary / Inventory / Cost by Service

Now we can use the data we have to visualize the consumption for our EA enrollment.

My summary dashboard


Azure Resource Inventory

[Screenshot: Azure resource inventory]

Kudos to my colleague Marcel Keller for providing samples for the Inventory view and for extracting the resource name.

Cost by Service

[Screenshot: cost by service]

You can find the template link at the end of the blog post.

Enable drillthrough for Consumption data

We can use the Power BI drillthrough feature to navigate to usage details / meter details and analyze the data in more depth.
First, create a new page, select Consumed Service as the drillthrough filter, and select one of the services to enable drillthrough.

[Screenshot: sample drillthrough page]

Now we can go back to the Cost by Service dashboard, select any service, and right-click / Drillthrough / Details by Service.

[Screenshot: drillthrough navigation]

Sample Details Dashboard

[Screenshot: details by service]

Using the same approach, we can create detail pages focusing on different elements of Azure usage.

Download Template

Please note that after you open the template you need to cancel the data refresh, go to Edit Queries, and change the data source for Marketplace / Pricesheets / Summaries: select Advanced Editor and change the highlighted enrollment number (100) to your own enrollment ID.

[Screenshot: enrollment ID in Advanced Editor]

Also, for the Usage data, go to Advanced Editor and change the highlighted enrollment number to your own.

[Screenshot: enrollment ID in the Usage query]

Specify your account key and connect.

[Screenshot: account key prompt]

The template should populate with your usage data after the data refresh is completed.

Final Comments for Dashboards

  • Add more slicers to filter the data by Date, Subscription, Location or by Service
  • Add more drillthrough dashboards to control user experience
  • Bring in usage / performance from Log Analytics to correlate with consumption

In Part 2, I will focus on utilizing Azure Tags for usage analysis.

[Latest Update] SQL Server All-in-One One-Day Seminar [Seminar] Held in Tokyo (updated 12/9)


Tuesday, December 11, 2018, 13:00-17:30 (doors open at 12:45)

Microsoft Japan, Shinagawa Headquarters, 31F, Seminar Rooms C+D
Shinagawa Grand Central Tower, 2-16-3 Konan, Minato-ku, Tokyo
https://www.microsoft.com/ja-jp/mscorp/branch/sgt.aspx

 

<Overview> This seminar is designed to deliver the latest information available at the time it is held.

Register here

 

 

 

ConfigMgr Application Approval via Email


ConfigMgr 1810 introduced the feature to receive email-based notifications for application approval requests.

Here's a step-by-step guide to configuring the flow: when a user requests an application, an email is sent to the application approver to either approve or deny the request.

Prerequisites

  • Turn on the Feature "Approve application requests for users per device"

Configure Email Notification

  • Go to Monitoring > Overview > Alerts > Subscription
  • Click on Configure Email Notification from the ribbon menu.

  • Populate the SMTP server information and port. For Office 365 Exchange Online, use smtp.office365.com with port 587.
  • Specify a connection account.
  • Specify sender email address for the notification.

Use the Test SMTP Server button to send a test email for validation. Refer to NotiCtrl.log for troubleshooting.
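If the built-in test fails, the same SMTP settings can be validated independently of ConfigMgr. The hedged Python sketch below builds a notification-style message and shows, commented out, how it would be sent through smtp.office365.com on port 587 with STARTTLS; the account names and addresses are placeholders, not values from the post.

```python
import smtplib
from email.message import EmailMessage

def build_test_mail(sender: str, recipient: str) -> EmailMessage:
    # Assemble a minimal test message resembling an approval notification.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "ConfigMgr SMTP test"
    msg.set_content("Test message for the application approval notification flow.")
    return msg

msg = build_test_mail("cm-notify@contoso.com", "approver@contoso.com")

# Actual send, matching the Office 365 settings above (uncomment to use):
# with smtplib.SMTP("smtp.office365.com", 587) as server:
#     server.starttls()
#     server.login("connection-account@contoso.com", "<password>")
#     server.send_message(msg)
```

If this succeeds outside ConfigMgr but NotiCtrl.log still shows failures, the problem is likely the connection account or firewall rules rather than the SMTP settings themselves.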

Access SMS Provider over Internet

You may want the approve/deny workflow to work even outside the corporate network. It's now possible to access the SMS Provider (WMI) over HTTPS via the CMG, leveraging the ARM model with Azure AD User Discovery enabled.

  • On your ConfigMgr console go to Administration > Site Configuration > Servers and Site System Roles
  • Select the Server holding SMS Provider Role. [If unsure, check the Site Properties to confirm]
  • Go to the Properties of SMS Provider and check the box Allow Configuration Manager cloud management gateway traffic for administration service.

  • Back in ConfigMgr console go to Administration > Cloud Services > Azure Services
  • Select the Cloud Management Azure Service and go to its Properties > Applications tab.
  • Make a note of the Native Client App

  • Click the Discovery tab to ensure AAD User discovery is enabled.
  • Back in ConfigMgr console go to Administration > Cloud Services > Cloud Management Gateway
  • Make a note of the CMG Service Name

  • Go to the Azure portal, select Azure Active Directory, and then select App registrations. You may need to click View all applications
  • Search for the Native Client App you noted from ConfigMgr console.
  • Click to open the app and select Settings


  • From the Settings blade select Redirect URIs.


  • In the Redirect URIs blade, paste in the following path: https://<CMG FQDN>/CCM_Proxy_ServerAuth/ImplicitAuth

    [Replace <CMG FQDN> with the CMG Service Name you noted from the ConfigMgr console.]

  • Click Save. Close the Settings pane.


  • In the app properties, select Manifest.


  • In the Edit manifest blade, find the oauth2AllowImplicitFlow property.
  • Change its value to true. For example, the entire line should look like the following line: "oauth2AllowImplicitFlow": true,
  • Select Save.
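The manifest edit above is a one-property change. As a sketch of what it looks like applied programmatically, here is the change over an illustrative manifest fragment (only the relevant property is shown; a real app manifest has many more properties, and the appId is a placeholder):

```python
import json

# Illustrative manifest fragment downloaded from the App registrations blade.
manifest = json.loads('{"appId": "00000000-0000-0000-0000-000000000000", '
                      '"oauth2AllowImplicitFlow": false}')

# Enable the implicit grant flow so the approval links can authenticate via CMG.
manifest["oauth2AllowImplicitFlow"] = True

print(json.dumps(manifest["oauth2AllowImplicitFlow"]))  # true
```

Note that JSON booleans are lowercase, which is why the edited line in the portal must read `"oauth2AllowImplicitFlow": true,`.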

Deploy Application

Now it's time to deploy your desired application to a user group.

  • Check the box An administrator must approve a request for this application on the device.
  • You can also specify the email address of the application owner or approver. This can be unique for each application and supports multiple email addresses.

From here the user requests the Application from Software Center

The approver receives the email notification to Approve/Deny.

When I hover over the approve link, it points to my CMG to access the SMS Provider over the Internet.

The application is automatically installed, and the requestor doesn't need to take any action! However, if the request is denied, the user isn't notified and has to check Software Center.


Thanks,

Arnab Mitra

Announcing the Monthly Webinar for MPN Partners, December 2018 (updated 12/7)


Microsoft shares the latest information on products, campaigns, and programs for partners every day.
This webinar covers the topics we especially want partners to know about, in a session of about one hour each month.

The December session will be held on Friday, December 14, 2018, 13:30-14:30.

※ The content of this webinar is subject to change without notice, and the session may end early depending on the agenda.
Thank you for your understanding.

■ Agenda (planned)
・MPN benefit: Introduction to the Cloud Business Consultation Center

etc

▼ Register here (a recording will be available after the event)

You can find past monthly webinars under the "月例ウェビナー (MPN Monthly Webinar)" tag.

[Customer Story] Aioi Nissay Dowa Insurance advances the digital shift of existing operations with Microsoft Dynamics 365 and UiPath RPA (updated 12/8)


Aioi Nissay Dowa Insurance Co., Ltd., part of the MS&AD Insurance Group, will begin working with partner companies in November 2018 to realize a "digital shift of existing operations" as part of "promoting digitalization toward the digital revolution," a key strategy of AD Vision 2021.

Through this initiative, the company aims to free up roughly 1.38 million hours of capacity by fiscal 2021, creating an environment where employees can focus on more creative work. It also aims to significantly reduce the roughly 1,200 tons of copy paper it currently uses each year.

Read more
