Channel: TechNet Blogs

Introducing IoT Hub device streams in public preview


In today's security-first digital era, ensuring secure connectivity to IoT devices is of the utmost importance. A wide range of operational and maintenance scenarios in the IoT space rely on end-to-end device connectivity so that users and services can interact with devices, log in, troubleshoot, or send and receive data.

The Japanese version of this post is available at the following URL:

https://azure.microsoft.com/ja-jp/blog/introducing-iot-hub-device-streams-in-public-preview/

Note: This post is a translation of Introducing IoT Hub device streams in public preview, originally published on January 23, 2019.

The full list of Azure announcements is available at https://azure.microsoft.com/ja-jp/blog/topics/announcements/.


Azure Service Bus and Azure Event Hubs expand availability


The Azure Messaging team works continuously to strengthen the resiliency and availability of its services (Azure Service Bus, Azure Event Hubs, and Azure Event Grid).

The Japanese version of this post is available at the following URL:

https://azure.microsoft.com/ja-jp/blog/azure-service-bus-and-azure-event-hubs-expand-availability/

Note: This post is a translation of Azure Service Bus and Azure Event Hubs expand availability, originally published on January 23, 2019.

The full list of Azure announcements is available at https://azure.microsoft.com/ja-jp/blog/topics/announcements/.

HDInsight Tools for Visual Studio Code now generally available


We are pleased to announce the general availability of HDInsight Tools for Visual Studio Code. With HDInsight Tools for Visual Studio Code (VSCode), developers get a lightweight yet powerful code editor.

The Japanese version of this post is available at the following URL:

https://azure.microsoft.com/ja-jp/blog/hdinsight-tools-for-visual-studio-code-now-generally-available/

Note: This post is a translation of HDInsight Tools for Visual Studio Code now generally available, originally published on January 23, 2019.

The full list of Azure announcements is available at https://azure.microsoft.com/ja-jp/blog/topics/announcements/.

(Cloud) Tip of the Day: Introducing the new Azure PowerShell Az module


Today's tip...

Starting in November 2018, the Azure PowerShell Az module is available for full public preview. Az offers shorter commands, improved stability, and supports Windows, macOS, and Linux. Az also offers feature parity and an easy migration path from AzureRM.

Az uses the .NET Standard library, which means it runs on PowerShell 5.x and PowerShell 6.x. Since PowerShell 6.x can run on Linux, macOS, and Windows, that means Az is available for all platforms. Using .NET Standard allows us to unify the code base of Azure PowerShell with minimal impact on users.

Az is a new module, so its version has been reset. The first stable release will be 1.0, and the module has feature parity with AzureRM as of November 2018.

Office 365 Weekly Digest | January 20 – 26, 2019


Welcome to the January 20 - 26, 2019 edition of the Office 365 Weekly Digest.

Another eight features were added to the Office 365 Roadmap last week, including a few for SharePoint Online as well as additions for Outlook (web), Teams, and To-Do.

There are several Teams and Customer Immersion Experience events coming up. New this week is a webcast on January 29, 2019 regarding business strategy on trust and privacy and how Microsoft can help empower customers to differentiate themselves in these areas.

Highlights from last week's blog posts include information on new features to save files to the cloud more easily, the January 2019 update on SharePoint Modernization, updated guidance for upgrading from Skype for Business to Microsoft Teams, and the availability of Office 365 for Mac in the Mac App Store.

Noteworthy items from last week include a Microsoft IT Showcase webinar on how Microsoft is modernizing device management, recent updates in Microsoft Flow, details on the January 2019 release for Office for Mac, and best practices from Microsoft's Cyber Defense Operations Center.

 

OFFICE 365 ROADMAP

 

Below are the items added to the Office 365 Roadmap last week…

 

Feature 45971 | Office 365 | Status: In development | Added: 01/22/2019 | Estimated release: February CY2019
Office.com is becoming the default start page for all Office 365 commercial users. Currently in Office 365, there is a setting that allows users to personalize what page they land on when they log into Office 365. Office.com has evolved to pull a user's most relevant apps, documents, and places where they are working, all in one place, and consequently will be the default page a user lands on when signing into Office 365 at Office.com.

Feature 45520 | To-Do | Status: Rolling out | Added: 01/22/2019 | Estimated release: January CY2019
Microsoft To-Do: file attachments to tasks. Add more context to each task by adding file attachments to tasks in To-Do or viewing the file attachments you've added using Outlook Tasks.

Feature 46094 | Teams | Status: In development | Added: 01/24/2019 | Estimated release: February CY2019
Auto Attendant / Call Queues Administration Enhancement. The administration of new and existing Auto Attendants or Call Queues for your organization will be migrated to the Teams Admin Center. A new experience will also be introduced for creating Resource Accounts. We will be gradually rolling this out starting in the last week of February 2019, provided the testing with our early adopter customers meets our goals for quality.

Feature 46105 | SharePoint | Status: In development | Added: 01/24/2019 | Estimated release: February CY2019
Column drag and drop. If you need to move a column around in a list or library, you can drag the column header to a new location in the view.

Feature 34338 | Outlook (web) | Status: In development | Added: 01/24/2019 | Estimated release: Q2 CY2019
Outlook on the web - New Tasks experience. The new Tasks experience has a redesigned look and feel to go with the new Outlook on the web, plus integration with Microsoft To-Do so you can manage your tasks on the go.

Feature 46103 | SharePoint | Status: In development | Added: 01/24/2019 | Estimated release: Q2 CY2019
Sticky column headers. In larger lists and libraries, the column headers will remain visible as you scroll vertically or horizontally.

Feature 46104 | SharePoint | Status: In development | Added: 01/24/2019 | Estimated release: Q2 CY2019
Add columns between columns. You can insert new columns in place between existing columns in a modern list or library view.

Feature 45063 | Office, OneDrive, SharePoint | Status: Rolling out | Added: 01/25/2019 | Estimated release: January CY2019
Saving to the cloud - simplified. We're making it easier for your users to save their files to Microsoft 365 cloud storage locations. This new experience allows users of Word, Excel & PowerPoint on Windows and macOS to save documents to OneDrive and SharePoint Online more easily. When users go to manually save using Ctrl-S (Windows), Cmd-S (macOS), the Save button in the QAT (Quick Access Toolbar), or App Exit, they will see a new save dialog.

 

 

UPCOMING EVENTS

 

Getting Started with Microsoft Teams

When: Tuesday, January 29, 2019 at 7am PT | This 60-minute session introduces you to the key activities needed to get started with Microsoft Teams today. From setting your profile, to running a meeting, users will leave this session with the foundation needed to use Teams with confidence. Check here for sessions in different time zones and other dates. The session is also available on demand at https://aka.ms/teamsgettingstartedondemand.

 

Customer Immersion Experience: Visualizing, Analyzing & Sharing Your Data Without Having to be a BI Expert

When: Tuesday, January 29, 2019 at 9am PT and 12pm PT | This 2-hour hands-on experience will give you the opportunity to test drive the latest business analytics tools. A trained facilitator will guide you as you apply these tools to your own business scenarios and see how they can work throughout your organization. During this interactive session, you will explore how to: (1) Locate and organize large amounts of data from multiple sources, (2) Visualize complex data and identify trends quickly without having to be a BI expert, (3) Find and collaborate with company experts on the fly, even if they work in another part of the country, and (4) Gather colleagues' opinions easily and eliminate communication and process bottlenecks. Each session is limited to 12 participants; reserve your seat now.

 

Webcast: Championing privacy rights to drive differentiation

When: Tuesday, January 29, 2019 at 9am PT | As organizations rapidly move to the cloud, it's becoming increasingly important to ensure customer data is private, secure, and managed in accordance with the law of the land. Customers and the legislative bodies who represent them are increasingly aware of the privacy implications of the emerging technologies they place their trust in. Cloud providers owe it to customers to provide visibility into the efforts being taken to help in this journey. Join speakers Alym Rayani, Sr. Director, Microsoft 365; Kurt DelBene, Executive Vice President, Microsoft Corporate Strategy; Bret Arsenault, Microsoft Chief Information Security Officer, Rudra Mitra, Partner Director, Microsoft Engineering on January 29, 2019 at 9:00am PST to learn the latest about business strategy on trust and privacy and how we can help empower customers to differentiate themselves in these areas.

 

Getting Started with Microsoft Teams

When: Wednesday, January 30, 2019 at 1pm PT | This 60-minute session introduces you to the key activities needed to get started with Microsoft Teams today. From setting your profile, to running a meeting, users will leave this session with the foundation needed to use Teams with confidence. Check here for sessions in different time zones and other dates. The session is also available on demand at https://aka.ms/teamsgettingstartedondemand.

 

Getting Started with Microsoft Teams

When: Thursday, January 31, 2019 at 8am PT | This 60-minute session introduces you to the key activities needed to get started with Microsoft Teams today. From setting your profile, to running a meeting, users will leave this session with the foundation needed to use Teams with confidence. Check here for sessions in different time zones and other dates. The session is also available on demand at https://aka.ms/teamsgettingstartedondemand.

 

Make the switch from Skype for Business to Microsoft Teams: End User Guidance

When: Thursday, January 31, 2019 at 10am PT | Designed specifically for Skype for Business end users, this course offers everything you need to help make the transition to Microsoft Teams. We'll focus on the core communication capabilities you use today, chat and meetings, as well as provide an orientation to additional collaboration functionality Teams has to offer. Check here for sessions in different time zones and other dates. The session is also available on demand at https://aka.ms/fromskypetoteamsondemand.

 

Customer Immersion Experience: Connecting, Organizing & Collaborating with Your Team

When: Tuesday, February 5, 2019 at 9am PT and 12pm PT | During this session, you will have the opportunity to experience Windows 10, Office 365 and Microsoft's newest collaboration tool: Microsoft Teams. A trained facilitator will guide you as you apply these tools to your own business scenarios and see how they work for you. During this interactive session, you will explore how to use Microsoft Teams and Office 365 to: (1) Create a hub for team work that works together with your other Office 365 apps, (2) Build customized options for each team, (3) Keep everyone on your team engaged, (4) Coauthor and share content quickly, and (5) Gain skills that will save you time and simplify your workflow immediately. Each session is limited to 12 participants; reserve your seat now.

 

Customer Immersion Experience: Hands-on with security in a cloud-first, mobile-first world

When: Thursday, February 7, 2019 at 9am PT and 12pm PT | This 2-hour hands-on session will give you the opportunity to try Microsoft technology that secures your digital transformation with a comprehensive platform, unique intelligence, and partnerships. A trained facilitator will guide you as you apply these tools to your own business scenarios and see how they work for you. During this interactive session, you will: (1) Detect and protect against external threats by monitoring, reporting and analyzing activity to react promptly and keep your organization secure, (2) Protect your information and reduce the risk of data loss, (3) Provide peace of mind with controls and visibility for industry-verified conformity with global standards in compliance, (4) Protect your users and their accounts, and (5) Support your organization with enhanced privacy and compliance to meet the General Data Protection Regulation. Each session is limited to 12 participants; reserve your seat now.

 

Customer Immersion Experience: Identity & Access Management and Information Protection

When: Friday, February 8, 2019 at 7am PT and 11am PT | Join us for this online, facilitator-led learning experience, built in an Azure environment. This event is designed to allow you to experience real-world solutions that will secure your employees' mobile devices and applications with Microsoft Enterprise Mobility + Security (EMS). Identity & Access Management and Information Protection are among the biggest challenges facing companies of all sizes today. By attending this online event, you will learn how using EMS and Azure Information Protection provides secure access to your applications, lowers your IT overhead, and protects your data no matter what device your employees and customers are using. During this interactive session, you will explore how to: (1) Use EMS and Azure Information Protection to provide secure access to your applications, (2) Lower your IT overhead, and (3) Protect your data no matter what device your employees and customers are using. Each session is limited to 15 participants; reserve your seat now.

 

Customer Immersion Experience: Identity Driven Security - Keep Pace with Security Challenges

When: Friday, February 15, 2019 at 7am PT and 11am PT | Join us for this live, hands-on learning experience, built in an Azure environment. This event is designed to allow you to experience real-world solutions and learn how to keep your organization protected with holistic enterprise security management tools. Interactions between users, devices, apps, and data have become increasingly complex with the transition to mobility and cloud, generating new blind spots for IT. During this online experience exploring Microsoft Identity Driven Security solutions, you will see how to address these issues by protecting the "front door", gaining visibility into user and data activity, and identifying attackers within your organization with behavioral analytics and anomaly detection. During this interactive session, you will explore how to: (1) Protect the "front door", (2) Identify attackers within your organization with behavioral analytics and anomaly detection, and (3) Gain visibility into user and data activity. Each session is limited to 15 participants; reserve your seat now.

 

BLOG ROUNDUP

 

Save your files to the cloud more easily

To protect against device loss or damage and to provide anywhere access to files, we recommend storing them in Office 365. In June 2018, we announced Known Folder Move (KFM) in OneDrive for customers on Windows 7, 8.1 and Windows 10. Known Folder Move provides an easy way to redirect your desktop documents and folders to OneDrive, making OneDrive the default location for those files. With KFM, your content is automatically synced to OneDrive with no disruption to productivity. We are announcing a new capability that makes it easier for you to create and save your Word, Excel, or PowerPoint document directly to the cloud. When you go to save an Office 365 document using Ctrl+S (Windows), Cmd+S (macOS), or the Save button, the new dialog box will default to OneDrive or SharePoint Online. And if you forget to save a new document before exiting, you will also see this updated save experience. Once a document is saved in the cloud, you can easily rename the file and change the location from the title bar. This ability to save your document to the cloud directly from Word, Excel, or PowerPoint will roll out to Office 365 on Windows and Mac beginning in February.

 

January 2019 SharePoint Modernization News

We've been continuing to update and modernize user experiences throughout OneDrive and SharePoint. And as we noted earlier this month, we're making it easier than ever to share these updates with more and more users. We're happy to share news about modern features coming to SharePoint this quarter. Some of these are updates to classic features, while others are brand new. These features / updates include: (1) Bulk check in/out, (2) Signals - visual cues about the status of a file, (3) Column Totals, (4) Sticky Headers, (5) Add Columns in between columns, (6) Column Drag and Drop, and (7) Document sets.

 

Updated guidance for upgrading from Skype for Business to Microsoft Teams

We are excited to announce updates to our online guidance designed to enable customers to plan and implement a successful upgrade from Skype for Business to Microsoft Teams. Incorporating learnings from real customer engagements and various community feedback to enhance and simplify, this new and improved guidance offers: (1) A streamlined end-to-end approach, making it easier to navigate our proven success framework for implementing change, (2) A "Get started" FAQ to highlight quickly the value of upgrading to Teams as well as when and how to take the next step in your journey, (3) A sample upgrade timeline that takes you from the pre-upgrade phase where you will plan and prepare for your upgrade, through the upgrade and into the post-upgrade phase, designed to sustain and amplify your outcomes, (4) A summary of key considerations when preparing to upgrade your tenant and users to Teams Only mode, and (5) Supplemental deep dive technical resources around coexistence, interoperability, and migration. | Related: Introducing Microsoft Teams Rooms | New MS Teams Firstline workers features for Healthcare

 

New interactive video features and deeper integrations with PowerPoint

It's easier than ever to create more visual and immersive experiences for training, learning and communicating. Beginning this quarter, expanded features in Microsoft Stream give teams a new way to seamlessly add quizzes, forms or polling into training videos. The Forms integration into Microsoft Stream helps to make videos more engaging and interactive for learners, while giving trainers a way to understand how well the information is being comprehended. Bring Microsoft Stream videos into presentations with the upcoming embed feature available in PowerPoint. Beginning in February, customers will be able to seamlessly use Stream videos in PowerPoint to enrich content and make learning more impactful.

 

Office 365 for Mac is available on the Mac App Store

Office empowers everyone to achieve more on any device. And Office loves Mac. We're committed to delivering the power and simplicity of Office in an experience designed specifically for Mac, and we continue to make significant investments in the platform. We're excited to announce that Office 365 is now available on the newly redesigned Mac App Store. With one click, Mac users can download the cloud-connected, always-up-to-date version of the Office suite—including full installs of Word, Excel, PowerPoint, Outlook, OneNote, and OneDrive. Office 365 provides experiences tailored to the Mac and macOS, like Dark Mode, Continuity Camera, OneDrive Files on Demand, and Touch Bar support on the MacBook Pro. And it's connected to the cloud, so you can access your content from any device, coauthor with anyone around the world in real-time, and use the power of artificial intelligence (AI) to create more impactful content with less effort. The result is an experience that is unmistakably Office but designed for Mac.

 

NOTEWORTHY

 

Microsoft IT Showcase: How Microsoft is modernizing device management

Format: Video (77 minutes) | Published: January 25, 2019 | Microsoft employees need to work from anywhere, including customer sites, cafes, even airplanes. Our employee mobility, however, puts business data at risk. During this webinar you will learn how Microsoft responds to this data safety challenge by using the Microsoft Enterprise Mobility Security platform.

 

Hey Cortana... introducing our latest Microsoft To-Do integration

Capturing your tasks and reminders just got a whole lot easier. Now, you can capture your tasks with a quick "Hey Cortana." Better yet, your tasks are centralized across Cortana, Outlook Tasks, and Microsoft To-Do, and they sync across all your devices. Read on to see what you can accomplish with Cortana's help.

 

Generate Word documents in your flows

The new Word Online (Business) connector lets you work with Word files in document libraries supported by Microsoft Graph: OneDrive for Business, SharePoint Sites, and Office 365 Groups. Today there are two actions: (1) Convert Word Document to PDF: Gets a PDF version of the selected file, and (2) Populate a Microsoft Word template: Reads a Microsoft Word template to then fill the template fields with selected dynamic values to generate a Word document. | Related: Introducing HTTP and Custom Connector Support for Data Loss Prevention Policies

 

Office for Mac - January 2019 release details

On January 16th, 2019, Microsoft released Office 365 for Mac Version 16.21 (Build 19011500) in 27 languages. Our Office International team was responsible for translating this release. You may see the following features when you update to it: (1) comments and @mentions in Excel, (2) print slide numbers in handouts, (3) use the Apple Continuity Camera to insert a photo from your device into a Word document, and (4) apply sensitivity labels to documents in Excel, Outlook and PowerPoint. More information and help content on this release can be found in the Mac section of the What's New in Office 365 page.

 

Monitoring Exchange Online User Client Access and Usage with Graph, PowerShell and Power BI

As a Tenant Admin of an Office 365 Exchange Online organization, have you ever needed to monitor who, what, and where someone is connecting to your Exchange Online resources, like accessing mailboxes on mobile devices? Pulling sign-in data from Azure Active Directory (AAD) is a breeze with Graph. After the data is extracted, using Power BI for visualization brings your reporting capabilities to a new level! Let's walk through a scenario where, as a Tenant Admin, you can find out who is accessing mailboxes in your Exchange Online tenant on mobile devices using the Exchange ActiveSync protocol (the default for mail apps on Apple and Android devices) from anywhere in the world.
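As a rough sketch of the extraction step, the snippet below pulls sign-in records from the Microsoft Graph `auditLogs/signIns` endpoint and tallies them by client app. It assumes you already hold an OAuth access token with the AuditLog.Read.All permission (acquiring one, for example via MSAL, is out of scope here); the function names are ours, not from the article.

```python
# Sketch (not the article's exact code): pull Azure AD sign-in events from
# Microsoft Graph and summarize which client apps are hitting mailboxes.
# Token acquisition is assumed to have happened already.
import json
import urllib.parse
import urllib.request
from collections import Counter

GRAPH_SIGNINS = "https://graph.microsoft.com/v1.0/auditLogs/signIns"

def fetch_signins(token, filter_expr=None):
    """Yield sign-in records, following @odata.nextLink paging."""
    url = GRAPH_SIGNINS
    if filter_expr:
        url += "?$filter=" + urllib.parse.quote(filter_expr)
    while url:
        req = urllib.request.Request(url, headers={"Authorization": "Bearer " + token})
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        yield from body.get("value", [])
        url = body.get("@odata.nextLink")  # absent on the last page

def count_by_client_app(records):
    """Group sign-in records by clientAppUsed (e.g. 'Exchange ActiveSync')."""
    return Counter(r.get("clientAppUsed", "Unknown") for r in records)
```

Feeding the resulting counts into Power BI (for example via a CSV export) then gives the visualization layer described above.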

 

Microsoft's Cyber Defense Operations Center shares best practices

Today, a single breach, physical or virtual, can cause millions of dollars of damage to an organization and potentially billions in financial losses to the global economy. Each week seems to bring a new disclosure of a cybersecurity breach somewhere in the world. As we look at the current state of cybersecurity challenges today, we see the same types of attacks, but the sophistication and scope of each attack continues to grow and evolve. Add to these the threats of nation-state actors seeking to disrupt operations, conduct intelligence gathering, or generally undermine trust. You can download the Cyber Defense Operations Center strategy brief to gain more insight into how we work to protect, detect, and respond to cybersecurity threats.

 

The top 10 articles from 2018


We may be a month into 2019, but there's still one more thing we need to do - check out the top ten articles from last year! A massive thank you to our guest writers and contributors for providing these articles, and we hope to work with writers new and old in 2019 to create even more great reading content for everyone at TechNet UK.

Without further ado, here's the top ten!

 

10. Simplified Lambda Architecture with Cosmos DB and Databricks

Theo van Kraay, Data and AI Solution Architect at Microsoft, returns with a short blog on simplified Lambda Architecture with Cosmos DB, ChangeFeed, and Spark on Databricks. This is an addendum to a prior article, on the topic of implementing a lambda architecture with Azure Cosmos DB.

Read the article

 

9. Data always tells a story

The difference between business intelligence and advanced analytics often attracts conflicting interpretations from users, vendors, and industry commentators. Gavin Payne explores these differences, and why business intelligence and advanced analytics are increasingly different.

Read the article

 

8. Purchase a VM with Azure Reserved Virtual Machine Instances (RIs)

As of November 2017, you’re no longer limited to an operational expenditure model in Azure. By using Azure Reserved Virtual Machine Instances (known as RIs) you can change your cloud payment model to a capital expenditure model.

Read the article

 

7. Build a bot in under 3 minutes… in Azure!

Chat bots are an increasingly popular way for businesses to disclose information in a more conversational way, but where do you even start? Theo van Kraay takes a look at how you can get your own bot up and running in a matter of minutes.

Read the article

 

6. Bing Maps – Distance Matrix API

Jamie Maguire introduces the Bing Distance Matrix API, looking at example code and seeing how it can be applied to real-world applications.

Read the article

 

5. Classifying the UK’s roofs from aerial imagery using deep learning with CNTK

Dome, Dormer or Dutch Gable? Deep neural network! How can we automatically identify the roof-shape of millions of buildings across the UK? In this blog post, we describe how we worked with Ordnance Survey, Britain’s mapping agency, to classify roof types from their geospatial data.

Read the article

 

4. Run your Python script on demand with Azure Container Instances and Azure Logic Apps

Basim Majeed, Cloud Solution Architect at Microsoft, shows how you can host a Python script in Azure Container Instances, and how to then integrate the container in a workflow using Azure Logic Apps.

Read the article

 

3. What are Azure Availability Zones and why should you use them?

We take a look at the upcoming Azure Availability Zones, with tips on migration, security and scalability for your Azure projects.

Read the article

 

2. Deploying externally generated Python/R Models as Web Services using Azure Machine Learning Studio

Theo van Kraay takes us through how to deploy an externally trained and serialised sklearn Python machine learning model, or a pre-saved model generated in R, as a web service using Azure Machine Learning Studio.

Read the article

 

1. How to Automate Processing your Azure Analysis Services Models

Stephen Armory, Cloud Solution Architect at Microsoft, provides a detailed, step-by-step look at how you can process your Azure Analysis Services Models.

Read the article

Office 365 for Mac is available on the Mac App Store


By Jared Spataro, Corporate Vice President for Microsoft 365

Office empowers everyone to achieve more on any device. And Office loves Mac. We are committed to delivering the power and simplicity of Office in an experience designed specifically for Mac, and we continue to make significant investments in the platform. Today, we are pleased to announce that Office 365 is available on the newly redesigned Mac App Store. With one click, Mac users can download the cloud-connected, always-up-to-date version of the Office suite, including full installs of Word, Excel, PowerPoint, Outlook, OneNote, and OneDrive.

Office 365 provides experiences tailored to the Mac and macOS, such as Dark Mode, Continuity Camera, OneDrive Files on Demand, and Touch Bar support on the MacBook Pro. And it is connected to the cloud, so you can access your content from any device, coauthor with anyone around the world in real time, and use the power of artificial intelligence (AI) to create more impactful content with less effort. The result is an experience that is unmistakably Office, but designed for Mac.

"We are excited to welcome Microsoft Office 365 to the new Mac App Store in macOS Mojave. Apple and Microsoft have worked together to bring great Office productivity to Mac users from the very beginning. Now, with Office 365 on the Mac App Store, it is easier than ever to get the latest and best version of Office 365 for Mac, iPad, and iPhone."

- Phil Schiller, Senior Vice President of Worldwide Marketing at Apple

You can read Apple's announcement in its newsroom.

Download Office 365 from the Mac App Store.*

*It may take up to 24 hours for the app bundle to appear in all regions of the Mac App Store.

Fuzzing para-virtualized devices in Hyper-V


Introduction

Hyper-V is the backbone of Azure, running on its hosts to provide efficient and fair sharing of resources, as well as isolation. That's why we, the vulnerability research team for Windows, have been working in the background for years to help secure Hyper-V. It is also why Microsoft invites security researchers across the globe to submit their vulnerabilities through the Hyper-V Bounty Program, with payments of up to $250,000 USD.

To help engage people in the Hyper-V security space, last year internal teams from Microsoft published some of their work.

At BlackHat 2018 USA Joe Bialek and Nicolas Joly presented "A Dive in to Hyper-V Architecture and Vulnerabilities". They covered an architecture overview of Hyper-V oriented to security researchers. They also discussed some interesting vulnerabilities seen in Hyper-V.

In the same conference, Jordan Rabet presented "Hardening Hyper-V through offensive security research", where he discussed in great detail the exploitation process for CVE-2017-0075 in VMSwitch, a Hyper-V component.

Last December Saar Amar published a detailed blog with the fundamentals to get introduced into Hyper-V security research.

Following their work, we'd like to share a new story related to Hyper-V security for anyone interested in getting started with Hyper-V security research or learning more. Recently we have been working on Virtual PCI (VPCI), one of the para-virtualized devices available in Hyper-V, used to expose hardware to virtual machines. Like other para-virtualized devices, it uses VMBus for inter-partition communication.

In this blog post we would like to share some of our learnings, introduce both VMBus and VPCI, present one strategy for fuzzing the VMBus channel used by VPCI, and discuss one of our findings. Some of the concepts and strategies here can be applied to other virtual devices that use VMBus in Hyper-V.

VMBus overview

VMBus is one of the mechanisms used by Hyper-V to offer para-virtualization. In short, it is a virtual bus device that sets up channels between the guest and the host. These channels provide the capability to share data between partitions and to set up synthetic devices.

In this section we'll introduce the VMBus architecture, learn how channels are offered to partitions, and see how synthetic devices are set up.

The root partition (or host) hosts Virtualization Service Providers (VSPs) that communicate over VMBus to handle device access requests from child partitions. Child partitions (or guests), in turn, use Virtualization Service Consumers (VSCs) to redirect device requests to the VSP over VMBus. Child partitions require VMBus and VSC drivers to use the para-virtualized device stacks.

VMBus channels allow VSCs and VSPs to transfer data primarily through two ring buffers: upstream and downstream. These ring buffers are mapped into both partitions by the hypervisor, which also provides synthetic interrupts to drive notifications between partitions when data is available.

The architecture can be summarized in the next diagram:
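To make the ring-buffer mechanics concrete, here is a minimal, purely illustrative single-producer/single-consumer ring buffer in Python. The real VMBus rings live in shared memory mapped into both partitions and carry additional control and interrupt-masking fields, so this sketch only shows the read/write-index bookkeeping they are built on.

```python
# Conceptual sketch of a single-producer/single-consumer ring buffer.
# One slot is kept free so that the full and empty states are distinguishable.
class RingBuffer:
    def __init__(self, size):
        self.buf = bytearray(size)
        self.size = size
        self.read_idx = 0   # next byte the consumer will read
        self.write_idx = 0  # next byte the producer will write

    def bytes_used(self):
        return (self.write_idx - self.read_idx) % self.size

    def write(self, data):
        """Producer side: copy data in, or fail if there is no room."""
        if len(data) > self.size - 1 - self.bytes_used():
            return False
        for b in data:
            self.buf[self.write_idx] = b
            self.write_idx = (self.write_idx + 1) % self.size
        return True

    def read(self, n):
        """Consumer side: pop up to n bytes."""
        n = min(n, self.bytes_used())
        out = bytearray()
        for _ in range(n):
            out.append(self.buf[self.read_idx])
            self.read_idx = (self.read_idx + 1) % self.size
        return bytes(out)
```

In the VMBus case, the "notify" step after a write is a synthetic interrupt delivered by the hypervisor rather than a function call.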

A more detailed introduction to VMBus can be found in the presentations linked above.

Since VMBus allows I/O-related data transmission between the potentially malicious guest and the VSP drivers in the host, the latter are prime candidates for vulnerability hunting and fuzzing. A general approach to fuzzing virtual devices is to find the VMBus channel available to a VSC and use it to send malformed data to the VSP.

To do so, we need to understand broadly how VMBus channels are made available to VSCs. Let’s start by introducing how the VMBus device is made available to the guest. From a practical point of view, if you deploy a Windows Generation 2 Virtual Machine (enlightened guest) you can find the exposed VMBus device in the Device Manager:

The connection view in Device Manager also reveals that VMBus is exposed to the guest via ACPI. Indeed, its description can be found in the Differentiated System Description Table (DSDT):

Device(\_SB.VMOD.VMBS)
{
    Name(STA, 0x0F)
    Name(_ADR, Zero)
    Name(_DDN, "VMBUS")
    Name(_HID, "VMBus")
    Name(_UID, Zero)
    Method(_DIS, 0, NotSerialized)
    {
        And(STA, 0x0D, STA)
    }
    Method(_PS0, 0, NotSerialized)
    {
        Or(STA, 0x0F, STA)
    }
    Method(_STA, 0, NotSerialized)
    {
        Return(STA)
    }
    Name(_PS3, Zero)
    Name(_CRS, ResourceTemplate()
    {
        IRQ(Edge, ActiveHigh, Exclusive) {5}
    })
}

Once VMBus is ready, for every channel offered by the root partition, the guest will build a new node in the device tree. The summarized (and generic) flow is:

  1. The root partition offers a channel.
  2. The offer is delivered to the guest through a synthetic interrupt.
  3. In the guest, because of the interrupt, a bus relation query is injected in the PnP system.
  4. In the guest, the VMBus driver creates a new Physical Device Object (PDO) for the device stack. The information of the offer is saved in the PDO context.
  5. The device driver (for example VPCI), creates a new Functional Device Object (FDO) for the device stack. The routine used to create the FDO objects, for example AddDevice in the case of a Plug and Play driver, is a good point to find the code that allocates and opens the new VMBus channel.

A kernel debugger and the command “!devnode” can be used to list the devices available on top of VMBus inside a guest:

0: kd> !devnode 0 1
Dumping IopRootDeviceNode (= 0xffffe28c76fbd9e0)
DevNode 0xffffe28c76fbd9e0 for PDO 0xffffe28c76e6b830
  InstancePath is "HTREE\ROOT\0"
  State = DeviceNodeStarted (0x308)
  Previous State = DeviceNodeEnumerateCompletion (0x30d)
  .
  .
  .
  DevNode 0xffffe28c76ed19b0 for PDO 0xffffe28c76ecfd80
    InstancePath is "ROOT\ACPI_HAL\0000"
    State = DeviceNodeStarted (0x308)
    Previous State = DeviceNodeEnumerateCompletion (0x30d)
    DevNode 0xffffe28c76f17c00 for PDO 0xffffe28c76eeed30
      InstancePath is "ACPI_HAL\PNP0C08\0"
      ServiceName is "ACPI"
      State = DeviceNodeStarted (0x308)
      Previous State = DeviceNodeEnumerateCompletion (0x30d)
      DevNode 0xffffe28c76e9e8b0 for PDO 0xffffe28c76f52900
        InstancePath is "ACPI\ACPI0004"
        State = DeviceNodeStarted (0x308)
        Previous State = DeviceNodeEnumerateCompletion (0x30d)
        DevNode 0xffffe28c76f5b8b0 for PDO 0xffffe28c76f54d60
          InstancePath is "ACPI\PNP0003\3&fdac00f&0"
          State = DeviceNodeInitialized (0x302)
          Previous State = DeviceNodeUninitialized (0x301)
        DevNode 0xffffe28c76f5bbe0 for PDO 0xffffe28c76f59c30
          InstancePath is "ACPI\VMBus"
          ServiceName is "vmbus"
          State = DeviceNodeStarted (0x308)
          Previous State = DeviceNodeEnumerateCompletion (0x30d)
          .
          .
          .
          DevNode 0xffffe28c78629340 for PDO 0xffffe28c78625c90
            InstancePath is "VMBUS\{44c4f61d-4444-4400-9d52-802e27ede19f}\{7f7e8f36-7342-4531-a380-d3a9911f80bf}"
            ServiceName is "vpci"
            State = DeviceNodeStarted (0x308)
            Previous State = DeviceNodeEnumerateCompletion (0x30d)
            .
            .
Now that we’ve established VMBus as an interesting attack vector and seen how channels are exposed to the guest, we can discuss one of the virtual devices making use of it: VPCI.

Use case: VPCI

VPCI is a virtualized bus driver used to expose hardware to virtual machines. Scenarios using VPCI include SR-IOV (Single Root I/O Virtualization) and DDA (Discrete Device Assignment). It’s important to point out that VPCI is exposed to the guest only if there is a virtual device requiring it (and this must be configured by the host).

In this section we’ll learn how to find the VMBus channel used by VPCI, and how to use it to send arbitrary data to the VSP. We also provide the skeleton of a Windows driver to illustrate the idea.

As previously explained, every para-virtualized device requires a VSC and VSP pair. In the case of VPCI we’ll identify the VSC component as VPCI and the VSP component as VPCIVSP. VPCI is managed by the vpci.sys driver in the guest; on the other side, vpcivsp.sys manages the VPCIVSP component in the host. For the current analysis we are using vpci.sys version 10.0.17134.228.

Finding the VMBus channel

As we introduced before, the initialization of a new FDO is a good point to start searching for the allocation of VMBus channels.

Since VPCI is a Kernel-Mode Driver Framework (KMDF) driver, we are interested in the call to WdfDriverCreate, and specifically in the DriverConfig parameter:

NTSTATUS WdfDriverCreate(
  PDRIVER_OBJECT         DriverObject,
  PCUNICODE_STRING       RegistryPath,
  PWDF_OBJECT_ATTRIBUTES DriverAttributes, 
  PWDF_DRIVER_CONFIG     DriverConfig,
  WDFDRIVER              *Driver
);

The DriverConfig parameter is interesting because it’s a pointer to a WDF_DRIVER_CONFIG structure, where we can find the EvtDriverDeviceAdd callback function:

typedef struct _WDF_DRIVER_CONFIG {
  ULONG                     Size;
  PFN_WDF_DRIVER_DEVICE_ADD EvtDriverDeviceAdd;
  PFN_WDF_DRIVER_UNLOAD     EvtDriverUnload;
  ULONG                     DriverInitFlags;
  ULONG                     DriverPoolTag;
} WDF_DRIVER_CONFIG, *PWDF_DRIVER_CONFIG;

EvtDriverDeviceAdd is called by the PnP manager to perform device initialization when a new device is found.

In the VPCI case it is FdoDeviceAdd:

During FdoDeviceAdd, VPCI allocates the new VMBus channel with a call to VmbChannelAllocate:


The VmbChannelAllocate prototype can be found in the public vmbuskernelmodeclientlibapi.h header. The pointer to the allocated channel is returned through the third parameter:

/// \page VmbChannelAllocate VmbChannelAllocate
/// Allocates a new VMBus channel with default parameters and callbacks. The
/// channel may be further initialized using the VmbChannelInit* routines before
/// being enabled with VmbChannelEnable. The channel must be freed with
/// VmbChannelCleanup.
///
/// \param ParentDeviceObject A pointer to the parent device.
/// \param IsServer Whether the new channel should be a server endpoint.
/// \param Channel Returns a pointer to an allocated channel.
_IRQL_requires_(PASSIVE_LEVEL)
NTSTATUS
VmbChannelAllocate(
    _In_ PDEVICE_OBJECT ParentDeviceObject,
    _In_ BOOLEAN IsServer,
    _Out_ _At_(*Channel, __drv_allocatesMem(Mem)) VMBCHANNEL *Channel
    );

To better understand how the channel is allocated and where the reference is stored, let’s first review the call to FdoCreateVmbusChannel from FdoDeviceAdd:

__int64 __fastcall FdoDeviceAdd(__int64 a1, __int64 a2)
{
  __int64 v5; // rbx
  signed int v6; // esi
  .
  .
  .
  // WdfObjectGetTypedContextWorker, similar to WdfObjectGetTypedContext
  v5 = (*(__int64 (__fastcall **)(__int64))(WdfFunctions_01015 + 1616))(WdfDriverGlobals); 
  .
  .
  .
  v6 = FdoCreateVmbusChannel((_QWORD *)v5);
  .
  .
  .
 }

The first argument to FdoCreateVmbusChannel is the context of the FDO device. FdoCreateVmbusChannel calls VmbChannelAllocate and saves the reference to the allocated VMBCHANNEL on the stack (local variable):

__int64 __fastcall FdoCreateVmbusChannel(_QWORD *FdoContext)
{
  v1 = FdoContext;
.
.
.
  __int64 vpciChannel; // [rsp+70h] [rbp+10h]
.
.
.
  v5 = VmbChannelAllocate(v3, 0i64, &vpciChannel);

At this point the channel has been allocated but cannot be used yet, as it must be opened first. A client VSC opens an offered channel with a call to VmbChannelEnable.

The function prototype is also included in the vmbuskernelmodeclientlibapi.h header:

/// \page VmbChannelEnable VmbChannelEnable
/// Enables a channel that is in the disabled state by connecting to vmbus and
/// offering or opening a channel (whichever is appropriate for the endpoint
/// type).
///
/// See \ref state_model.
///
/// \param Channel A handle for the channel.  Allocated by \ref VmbChannelAllocate.
_Must_inspect_result_
NTSTATUS
VmbChannelEnable(
    _In_    VMBCHANNEL  Channel
    );

In Windows 10 Redstone 4 (1803) the call to VmbChannelEnable also happens in FdoCreateVmbusChannel. After that, the reference to the channel is saved in the FDO context:

  v5 = VmbChannelEnable(vpciChannel);
  if ( v5 >= 0 )
  {
    v1[3] = vpciChannel;
    return 0i64;
  }

Sending data through the VMBus Channel

Now that we understand how VPCI sets up its VMBus channel, a simple strategy to get a reference to it and use it for fuzzing is to install an upper filter driver for VPCI.

When the VPCI FDO device stack is created, our driver will be called by the PnP manager. At that point, the VMBus channel has already been allocated and enabled by FdoDeviceAdd, and we can access it through the VPCI FDO context.

Let’s see how to do it with a driver. The first step is to provide an INF file to install our filter driver for the VPCI device. Take into account that:

  • wvpci.inf is the INF for the VPCI driver.
  • The VPCI hardware id is VMBUS\{44C4F61D-4444-4400-9D52-802E27EDE19F}

;
; BlogDriver.inf
;

[Version]
Signature="$WINDOWS NT$"
Class=System
ClassGuid={4d36e97d-e325-11ce-bfc1-08002be10318}
Provider=%ManufacturerName%
DriverVer=
CatalogFile=BlogDriver.cat

[DestinationDirs]
DefaultDestDir = 12

[SourceDisksNames]
1 = %DiskName%,,,""

[SourceDisksFiles]
BlogDriver.sys  = 1

[Manufacturer]
%ManufacturerName%=Standard,NT$ARCH$

[Standard.NT$ARCH$]
%BlogDriver.DeviceDesc%=Install_Section, VMBUS\{44C4F61D-4444-4400-9D52-802E27EDE19F}

[Install_Section.NT]
Include=wvpci.inf
Needs=Vpci_Device_Child.NT
CopyFiles=BlogDriver_Files

[BlogDriver_Files]
BlogDriver.sys

[Install_Section.NT.HW]
Include=wvpci.inf
Needs=Vpci_Device_Child.NT.HW
AddReg=BlogDriver_AddReg

[BlogDriver_AddReg]
HKR,,"UpperFilters",0x00010000,"BlogDriver"

[Install_Section.NT.Services]
Include=wvpci.inf
Needs=Vpci_Device_Child.NT.Services
AddService=BlogDriver,,BlogDriver_Service_Child

[BlogDriver_Service_Child]
DisplayName    = %BlogDriver.SvcDesc%
ServiceType    = 1               ; SERVICE_KERNEL_DRIVER
StartType      = 3               ; SERVICE_DEMAND_START
ErrorControl   = 1               ; SERVICE_ERROR_NORMAL
ServiceBinary  = %12%\BlogDriver.sys

[Strings]
ManufacturerName="TestManufacturer"
ClassName=""
DiskName="BlogDriver Source Disk"
BlogDriver.DeviceDesc="Microsoft Hyper-V Virtual PCI Bus (With Filter)"
BlogDriver.SvcDesc="Microsoft Hyper-V Virtual PCI Bus (With Filter)"

Now let’s see the initial skeleton for the filter driver. Some clarifications first:

  • The AddDevice routine creates the filter device object and attaches it to the VPCI FDO. A reference to the VPCI VMBus channel is saved in the device extension to make access easier.
  • In this skeleton all the IRPs are just passed down through the device stack; we do not want to modify VPCI behavior, just access its VMBus channel.

The full skeleton ready to build and play can be found in this repo.
After installing the driver in the guest, the VPCI stack shows our filter driver:

0: kd> !devstack ffff8407f64cbad0
  !DevObj           !DrvObj            !DevExt           ObjectName
  ffff8407f2379de0  \Driver\BlogDriver ffff8407f2379f30  
> ffff8407f64cbad0  \Driver\vpci       ffff8407fa4e42f0  
  ffff8407f62e1c90  \Driver\vmbus      ffff8407f62e2310  00000024
!DevNode ffff8407f2fe26b0 :
  DeviceInst is "VMBUS\{44c4f61d-4444-4400-9d52-802e27ede19f}\{7f7e8f36-7342-4531-a380-d3a9911f80bf}"
  ServiceName is "vpci"

At this point we are ready to send data and fuzz through the channel. There are several public APIs available for sending packets through a VMBus channel. One of them is VmbChannelSendSynchronousRequest; it is one of the APIs used by VPCI, and it requires just a reference to the VMBCHANNEL to start working. The declaration is available in the vmbuskernelmodeclientlibapi.h header; note the Channel parameter:

/// \page VmbChannelSendSynchronousRequest VmbChannelSendSynchronousRequest
/// Sends a packet to the opposite endpoint and waits for a response.
///
/// Clients may call with any combination of parameters. The root may only call
/// this if *Timeout == 0 and the \ref VMBUS_CHANNEL_FORMAT_FLAG_WAIT_FOR_COMPLETION
/// flag is not set.
///
/// \param Channel A handle for the channel.  Allocated by \ref VmbChannelAllocate.
/// \param Buffer Data to send.
/// \param BufferSize Size of Buffer in bytes.
/// \param ExternalDataMdl Optionally, a MDL describing an additional buffer to
///     send.
/// \param Flags Standard flags.
/// \param CompletionBuffer Buffer to store completion packet results in.
/// \param CompletionBufferSize Size of CompletionBuffer in bytes. Must be
///     rounded up to nearest 8 bytes, or else call will fail. On success,
///     returns the number of bytes written into CompletionBuffer.
/// \param Timeout Optionally, a timeout in the style of KeWaitForSingleObject.
///     After this time elapses, the packet will be cancelled. If set to a
///     timeout of 0, this packet will not be queued if it does not fit in the
///     ring buffer.
///
/// \returns STATUS_SUCCESS
/// \returns STATUS_BUFFER_OVERFLOW - The packet did not fit in the buffer and
///     was not queued.
/// \returns STATUS_CANCELLED - The packet was canceled.
/// \returns STATUS_DEVICE_REMOVED - The channel is being shut down.
_When_(Timeout == NULL || Timeout->QuadPart != 0 ||
       (Flags & VMBUS_CHANNEL_FORMAT_FLAG_WAIT_FOR_COMPLETION) != 0,
       _IRQL_requires_(PASSIVE_LEVEL))
_When_(Timeout != NULL && Timeout->QuadPart == 0 &&
       (Flags & VMBUS_CHANNEL_FORMAT_FLAG_WAIT_FOR_COMPLETION) == 0,
        _IRQL_requires_max_(DISPATCH_LEVEL))
NTSTATUS
VmbChannelSendSynchronousRequest(
    _In_                            VMBCHANNEL      Channel,
    _In_reads_bytes_(BufferSize)    PVOID           Buffer,
    _In_                            UINT32          BufferSize,
    _In_opt_                        PMDL            ExternalDataMdl,
    _In_                            UINT32          Flags,
    _Out_writes_bytes_to_opt_(*CompletionBufferSize, *CompletionBufferSize)
                                    PVOID           CompletionBuffer,
    _Inout_opt_ _Pre_satisfies_(*_Curr_ % 8 == 0)
                                    PUINT32         CompletionBufferSize,
    _In_opt_                        PLARGE_INTEGER  Timeout
    );

There are other APIs publicly available and documented in vmbuskernelmodeclientlibapi.h:

  • VmbPacketSend
  • VmbPacketSendWithExternalMdl
  • VmbPacketSendWithExternalPfns

Before using any of these methods in your driver, remember to link against vmbkmcl.lib:

Searching for references to these methods in VPCI can help analyze and better understand the interactions with the VSP. Another helpful resource for understanding the communication is the Linux Integration Services code: the client (VSC) implementation for Linux can be found in pci-hyperv.c.

Finding the entry point of untrusted data in the VSP

In this section we’ll introduce packet processing on the VSP side. We’ll use VPCI as an example to learn how to locate the entry point for handling incoming VMBus packets. We won’t discuss the details of the Virtual PCI communications, though; that is out of scope for this post. For this analysis we are using vpcivsp.sys 10.0.17134.228.

For any VMBus endpoint, incoming packets on a channel will trigger the EvtVmbChannelProcessPacket callback, as explained in the documentation available in the vmbuskernelmodeclientlibapi.h header:

/// \page EvtVmbChannelProcessPacket EvtVmbChannelProcessPacket
/// \b EvtVmbChannelProcessPacket
/// \param Channel A handle for the channel.  Allocated by \ref VmbChannelAllocate.
/// \param Packet This completion context will be used to identify this packet to KMCL when the transaction can be retired.
/// \param Buffer This contains the packet which was sent by the opposite endpoint.  It does not contain the VMBus and KMCL headers.
/// \param BufferLength The length of Buffer in bytes.
/// \param Flags See VMBUS_CHANNEL_PROCESS_PACKET_FLAGS.
/// 
/// This callback is invoked when a packet has arrived in the incoming ring buffer.
/// For every invocation of this function, the implementer must eventually call
/// \ref VmbChannelPacketComplete.
///
/// This callback can be invoked at DISPATCH_LEVEL or lower, unless the channel
/// has been configured to defer packet processing to a worker thread.  See
/// \ref VmbChannelSetIncomingProcessingAtPassive for more information.
/// \code
typedef
_Function_class_(EVT_VMB_CHANNEL_PROCESS_PACKET)
_IRQL_requires_max_(DISPATCH_LEVEL)
VOID
EVT_VMB_CHANNEL_PROCESS_PACKET(
    _In_ VMBCHANNEL Channel,
    _In_ VMBPACKETCOMPLETION Packet,
    _In_reads_bytes_(BufferLength) PVOID Buffer,
    _In_ UINT32 BufferLength,
    _In_ UINT32 Flags
    );

The packet processing callback is registered with a call to VmbChannelInitSetProcessPacketCallbacks. It’s also declared in vmbuskernelmodeclientlibapi.h:

/// \page VmbChannelInitSetProcessPacketCallbacks VmbChannelInitSetProcessPacketCallbacks
/// Sets callbacks for packet processing. Only meaningful if KMCL queue
/// management is not suppressed.  TODO:  Make previous sentence more precise.
///
/// Note that ProcessPacketCallback will be invoked for every packet that
/// is received.  ProcessingCompleteCallback will be invoked every time the
/// ring buffer containing incoming packets transitions from non-empty to empty,
/// after the last invocation of ProcessPacketCallback in a single batch.
///
/// \param Channel A handle for the channel.  Allocated by \ref VmbChannelAllocate.
/// \param ProcessPacketCallback A callback that will be called when a packet is
///     ready for processing.
/// \param ProcessingCompleteCallback Optionally, a callback that will be called
///     when processing of a batch of packets has been completed.
///
/// \return STATUS_SUCCESS - function completed successfully
/// \return STATUS_INVALID_PARAMETER_1 - channel parameter was invalid or in an invalid state(Disabled)
NTSTATUS
VmbChannelInitSetProcessPacketCallbacks(
    _In_ VMBCHANNEL Channel,
    _In_ PFN_VMB_CHANNEL_PROCESS_PACKET ProcessPacketCallback,
    _In_opt_ PFN_VMB_CHANNEL_PROCESSING_COMPLETE ProcessingCompleteCallback
    );

With the above information, the packet processing method for the VPCI VSP can be found easily: in vpcivsp.sys, just search for references to VmbChannelInitSetProcessPacketCallbacks. The processing method is VirtualBusChannelProcessPacket:


Analysis of the packet processing is out of scope for this post, but we hope we have provided the initial hints for researchers willing to invest in this area.

Fuzzing results. One example - CVE-2018-0965

With the approach explained above we developed a fuzzer to target the packet processing in VPCI. In this section we’ll analyze one of the bugs hit by the fuzzer, which has recently been patched, and see the kinds of problems that can be found in inter-partition communication through VMBus channels.

CVE-2018-0965 is an RCE vulnerability that falls under Tier 1 of the Hyper-V Bounty Program; the official advisory is available from the Microsoft Security Response Center.

The bug lived in the packet processing method of the VPCI VSP. By diffing (using Diaphora) against the patched vpcivsp.sys (10.0.17134.285), the method VirtualBusChannelProcessPacket can be identified as modified:


By looking at the changes in VirtualBusChannelProcessPacket the interesting one is found:



The call to VirtualBusLookupDevice has been moved from outside a condition into the branch guarded by it. Let’s review the vulnerable code with more context. First, the interesting code:

void __fastcall VirtualBusChannelProcessPacket(__int64 a1, __int64 a2, __int64 a3, unsigned int a4)
{
  unsigned int v4; // er15
  __int64 v5; // rsi
  __int64 v7; // rax
  struct _KEVENT *v11; // rbx
  int v12; // edi
  unsigned int v13; // ecx
  .
  .
  .
  v4 = a4;
  v5 = a3;
  v13 = *(_DWORD *)v5;
  v7 = VmbChannelGetPointer(a1);
  v11 = (struct _KEVENT *)v7;
  .
  .
  .
  if ( v13 == 1112080407 )
  {
    if ( v11[3].Header.SignalState < 0x10002u )
    {
      v36 = 54;
    }
    else
    {
      if ( v4 < 0x50 )
      {
        v12 = -1073741789;
        v14 = 53;
        goto LABEL_26;
      }
      v45 = VirtualBusLookupDevice(v11, *(_DWORD *)(v5 + 4));
      v46 = (volatile signed __int32 *)v45;
      if ( !v45 )
      {
        v41 = 57;
        goto LABEL_71;
      }
      if ( *(_WORD *)(v5 + 12) <= 0x20u )
      {
        v47 = VirtualDeviceCreateSingleInterrupt(v45, v5, &v69);
        memset(&v73, 0, 0x50ui64);
        ...
        v73 = v47;
        ...
        VmbChannelPacketComplete(v6, &v73, 80i64);
        v34 = v46;
        goto LABEL_50;
      }
      v36 = 56;
    }
  }
.
.
.
  return;

LABEL_50:
  VirtualDeviceDereference(v34, v32, v33);
  return;
}

Now let’s recover the definition of the packet processing callback (EvtVmbChannelProcessPacket) from the public header and rewrite the code above with named arguments:

void __fastcall VirtualBusChannelProcessPacket(VMBCHANNEL Channel, VMBPACKETCOMPLETION Packet, PVOID Buffer,
                                               UINT32 BufferLength, UINT32 Flags)
{
  unsigned int v4; // er15
  __int64 v5; // rsi
  __int64 v7; // rax
  struct _KEVENT *v11; // rbx
  int v12; // edi
  unsigned int v13; // ecx
.
.
.
  v4 = BufferLength;
  v5 = Buffer;
  v13 = *(_DWORD *)v5;
  v7 = VmbChannelGetPointer(Channel);
  v11 = (struct _KEVENT *)v7;
.
.
.
  if ( v13 == 1112080407 )
  {
    if ( v11[3].Header.SignalState < 0x10002u )
    {
      v36 = 54;
    }
    else
    {
      if ( v4 < 0x50 )
      {
        v12 = -1073741789;
        v14 = 53;
        goto LABEL_26;
      }
      v45 = VirtualBusLookupDevice(v11, *(_DWORD *)(v5 + 4));
      v46 = (volatile signed __int32 *)v45;
      if ( !v45 )
      {
        v41 = 57;
        goto LABEL_71;
      }
      if ( *(_WORD *)(v5 + 12) <= 0x20u )
      {
        v47 = VirtualDeviceCreateSingleInterrupt(v45, v5, &v69);
        memset(&v73, 0, 0x50ui64);
        ...
        v73 = v47;
        ...
        VmbChannelPacketComplete(v6, &v73, 80i64);
        v34 = v46;
        goto LABEL_50;
      }
      v36 = 56;
    }
  }
.
.
.
  return;
.
.
.
LABEL_50:
  VirtualDeviceDereference(v34, v32, v33);
  return;
}

It’s worth clarifying that the third parameter, Buffer, points to the attacker-controlled data coming from the VPCI channel. The fourth parameter, BufferLength, is the size of Buffer in bytes.

The local variable identified as v13 is assigned from the first DWORD of Buffer and later compared against the constant 1112080407 (0x42490017). By looking at the Linux Integration Services code, the constant can easily be identified as PCI_CREATE_INTERRUPT_MESSAGE2. It means Buffer in this case is pointing to a pci_create_interrupt2 struct:

struct pci_message {
  u32 type;
} __packed;

/*
 * Function numbers are 8-bits wide on Express, as interpreted through ARI,
 * which is all this driver does.  This representation is the one used in
 * Windows, which is what is expected when sending this back and forth with
 * the Hyper-V parent partition.
 */
union win_slot_encoding {
  struct {
    u32 dev:5;
    u32 func:3;
    u32 reserved:24;
  } bits;
  u32 slot;
} __packed;

/**
 * struct hv_msi_desc2 - 1.2 version of hv_msi_desc
 * @vector:   IDT entry
 * @delivery_mode:  As defined in Intel's Programmer's
 *      Reference Manual, Volume 3, Chapter 8.
 * @vector_count: Number of contiguous entries in the
 *      Interrupt Descriptor Table that are
 *      occupied by this Message-Signaled
 *      Interrupt. For "MSI", as first defined
 *      in PCI 2.2, this can be between 1 and
 *      32. For "MSI-X," as first defined in PCI
 *      3.0, this must be 1, as each MSI-X table
 *      entry would have its own descriptor.
 * @processor_count:  number of bits enabled in array.
 * @processor_array:  All the target virtual processors.
 */
struct hv_msi_desc2 {
  u8  vector;
  u8  delivery_mode;
  u16 vector_count;
  u16 processor_count;
  u16 processor_array[32];
} __packed;

struct pci_create_interrupt2 {
  struct pci_message message_type;
  union win_slot_encoding wslot;
  struct hv_msi_desc2 int_desc;
} __packed;

It allows us to write the vulnerable code again with more information:

void __fastcall VirtualBusChannelProcessPacket(VMBCHANNEL Channel, VMBPACKETCOMPLETION Packet, PVOID Buffer,
                                               UINT32 BufferLength, UINT32 Flags)
{
  unsigned int v4; // er15
  pci_create_interrupt2 *createInterrupt; // rsi
  __int64 v7; // rax
  struct _KEVENT *v11; // rbx
  int v12; // edi
  unsigned int messageType; // ecx
.
.
.
  v4 = BufferLength;
  createInterrupt = Buffer;
  messageType = createInterrupt->message_type.type;
  v7 = VmbChannelGetPointer(Channel);
  v11 = (struct _KEVENT *)v7; // Looks like IDA analysis has misunderstood v7.
.
.
.
  if (messageType == PCI_CREATE_INTERRUPT_MESSAGE2)
  {
    if ( v11[3].Header.SignalState < 0x10002u ) // Looks like IDA analysis has misunderstood v7/v11.
    {
      v36 = 54;
    }
    else
    {
      if ( v4 < 0x50 )
      {
        v12 = -1073741789;
        v14 = 53;
        goto LABEL_26;
      }
      v45 = VirtualBusLookupDevice(v11, createInterrupt->wslot.slot);
      v46 = (volatile signed __int32 *)v45;
      if ( !v45 )
      {
        v41 = 57;
        goto LABEL_71;
      }
      if (createInterrupt->int_desc.processor_count <= 0x20u )
      {
        v47 = VirtualDeviceCreateSingleInterrupt(v45, createInterrupt, &v69);
        memset(&v73, 0, 0x50ui64);
        ...
        v73 = v47;
        ...
        VmbChannelPacketComplete(v6, &v73, 80i64);
        v34 = v46;
        goto LABEL_50;
      }
      v36 = 56;
    }
  }
.
.
.
  return;
.
.
.
LABEL_50:
  VirtualDeviceDereference(v34, v32, v33);
  return;
}

As a summary: in the vulnerable version, a PCI_CREATE_INTERRUPT_MESSAGE2 packet with a processor_count bigger than 0x20 can force a flow where VirtualBusLookupDevice is called but, after failing the condition, the function returns without calling VirtualDeviceDereference.

Let’s check both VirtualBusLookupDevice and VirtualDeviceDereference in the vulnerable version of vpcivsp.sys. Starting with VirtualBusLookupDevice:

signed __int64 __fastcall VirtualBusLookupDevice(struct _KEVENT *a1, int a2)
{
  struct _KEVENT *v2; // rsi
  int v3; // ebp
  struct _KEVENT *v4; // rbx
  char v5; // di
  signed __int64 v6; // rcx
  _LIST_ENTRY *i; // rax
  signed __int64 v8; // rbx

  v2 = a1 + 2;
  v3 = a2;
  v4 = a1;
  v5 = 0;
  KeWaitForSingleObject(&a1[2], 0, 0, 0, 0i64);
  v6 = (signed __int64)&v4[1].Header.WaitListHead;
  for ( i = v4[1].Header.WaitListHead.Flink; ; i = i->Flink )
  {
    v8 = (signed __int64)&i[-12].Blink;
    if ( i == (_LIST_ENTRY *)v6 )
      break;
    if ( *(_DWORD *)(v8 + 408) == v3 && (*(_DWORD *)(v8 + 1820) & 0x80u) != 0 )
    {
      _InterlockedIncrement((volatile signed __int32 *)(v8 + 200));
      v5 = 1;
      break;
    }
  }
  KeSetEvent(v2, 0, 0);
  return v8 & -(signed __int64)(v5 != 0);
}

We know, from the previous analysis, that:

  • The second argument is the device slot.
  • The first argument has been misinterpreted as a _KEVENT. It points to an object saved in the channel context, most likely a more complex structure that contains a _KEVENT as a field.
Let’s analyze the code again after some renaming:

signed __int64 __fastcall VirtualBusLookupDevice(__int64 a1, int slot)
{
  struct _KEVENT *v2; // rsi
  int v3; // ebp
  __int64 v4; // rbx
  char v5; // di
  signed __int64 v6; // rcx
  _QWORD *i; // rax
  signed __int64 v8; // rbx

  v2 = (struct _KEVENT *)(a1 + 48);
  v3 = slot;
  v4 = a1;
  v5 = 0;
  KeWaitForSingleObject((PVOID)(a1 + 48), 0, 0, 0, 0i64);
  v6 = v4 + 32;
  for ( i = *(_QWORD **)(v4 + 32); ; i = (_QWORD *)*i )
  {
    v8 = (signed __int64)(i - 23);
    if ( i == (_QWORD *)v6 )
      break;
    if ( *(_DWORD *)(v8 + 408) == v3 && (*(_DWORD *)(v8 + 1820) & 0x80u) != 0 )
    {
      _InterlockedIncrement((volatile signed __int32 *)(v8 + 200));
      v5 = 1;
      break;
    }
  }
  KeSetEvent(v2, 0, 0);
  return v8 & -(signed __int64)(v5 != 0);
}

  • The method works with the object pointed to by the first argument. Given the method’s name, VirtualBusLookupDevice, we can guess it is the virtual bus.
  • A _KEVENT within the virtual bus is used for synchronization.
  • A container is stored at offset 32 of the virtual bus object.
  • The main loop iterates over the container, most likely a list.
  • Within the loop, v8 holds the reference to each object in the container.
  • The field at offset 408 is compared against the slot id. The guess is that we are iterating over a list of devices.
  • If a matching device is found, its field at offset 200 is incremented and a reference is returned. The field at offset 200 looks like a 32-bit reference count.

Let’s go to VirtualDeviceDereference now. As a reminder, the first argument is the pointer returned by VirtualBusLookupDevice (most likely a device):

In the disassembly above, VirtualDeviceDereference decrements the field at offset 200 (identified as a potential reference count before). If the reference count reaches 0, VirtualDeviceDestroy is called, where the device is freed:

void __fastcall VirtualDeviceDestroy(PVOID P, __int64 a2, __int64 a3)
{
  char *v3; // rbx


  v3 = (char *)P;
  //
  // Lots of things...
  //
  ExFreePoolWithTag(v3, 0x49435056u);
}

To summarize: by sending PCI_CREATE_INTERRUPT_MESSAGE2 packets with a processor_count bigger than 0x20, the device reference count is incremented without a matching decrement. Repeated enough times, the 32-bit count can be overflowed and the device object unexpectedly freed, leading to a dangerous situation if pending references to the device are left… but that is a story for another blog 😊

Conclusion

We have learned the basics of VMBus, the main component providing para-virtualized devices in Hyper-V. We have also shown a generic approach to fuzzing VMBus channels, using VPCI as an example. Finally, we took a deep dive into one of the bugs recently found using this approach.
We hope the information here will be useful for security researchers interested in Hyper-V, and we encourage bug hunting from the security community.

PS: We are always looking for vulnerability researchers and security engineers to come help make Windows, Hyper-V, Azure and Linux more secure. If interested, please reach out at wdgsarecruitment@microsoft.com!

Virtualization Security Team.


The Story of SPC


The Beginning.

Our inaugural SharePoint Conference took place in Bellevue, Washington in May 2006, with keynote speaker Bill Gates calling SharePoint the most “revolutionary” element in all of Microsoft Office, while an eager group of hardcore techies listened as announcements, demonstrations, and previews detailed the future of information sharing with the upcoming release of Office SharePoint Portal Server 2007. SPC06 outlined where we wanted the industry to go, and how we were going to get it there… together. By the end of the week, more than sixty different sessions had discussed hundreds of topics including workflow, collaboration, higher connectivity, tablet computers, and advanced web services. The ideas were audacious, and the bars were set high, but one thing was certain… SPC was here to stay.

SPC08 once again rocked the greater Seattle area. Attendance had more than doubled since the first SPC and a highly energized community began to develop out of the rapidly growing user base. Attendees kicked back at social events, formed running groups, clubs, and other extracurricular activities, and shared stories and laughs at the numerous networking opportunities surrounding the conference. It was an action-packed week of SharePoint and fun, and it remains an extraordinary experience for the SharePoint community who attended.

What happens here, stays here…

In 2009, SPC09 traded in the tree-lined views of the Pacific Northwest for the bright lights of Las Vegas. SharePoint conference was now the main event on the calendar of IT Pros, Developers, and SharePoint stakeholders around the world. The technical aspect of the conference was anchored by the major SharePoint 2010 announcement, which would place SharePoint at the center of a connected intranet and internet, and create a seamless and integrated experience for users. The attitude at SPC was once again charged by the attendees, as partners, sponsors, community members and Microsoft experts came together for the largest SharePoint event yet.

Attendance had again nearly doubled. SharePoint was connecting people and information in an unexpected way. There were sessions, workshops, and labs during the day, followed by dancing and celebrations at night. The atmosphere was palpable, and the combination of new SharePoint material and nonstop social happenings kept the crowds coming back for more. Oh, and did we forget that Huey Lewis showed Sin City that it’s “Hip to be Square” against a backdrop of fireworks in front of screaming SharePoint crowds? No we didn’t. And if you were there, neither did you.

The happiest place on earth.

The gates of the Magic Kingdom at Disneyland opened to the SharePoint community for SPC11 in early October, 2011.  In only its fourth year, SPC was already a Tier 1 Microsoft event, and this year Mickey and the gang would have their hands full as SPC returned bigger and better than ever.  The conference buzzed with major announcements about SharePoint moving to the cloud. Conference regulars showed the ropes to first-time attendees and community members traded their Twitter handles and connected on LinkedIn.

At night, a spontaneous party grew into what would be called “Club SPC” and quickly overflowed into an adjacent hotel space dubbed “Lounge SPC”. Cocktails and business cards were handed around as DJs spun tracks onto the three floors of partygoers. Outside, Disney parades and fireworks erupted throughout the park while the announcement that SPC12 would be returning to Vegas rang out. Once again, SPC had been a resounding success. Driven by strong sessions, visionary speakers, informative videos, and amplified by an enthusiastic community, word was spreading through the social sphere that SharePoint Conference was not to be missed.

Share more. Do more.

With attendees from more than 83 countries, SPC12 was all about social. Having recently acquired Yammer, the conference highlighted how enterprise social was changing the way people worked together. It was a perfect match for the conference, which has always been about how people connect. During the keynote, #SPC12 became the top trending tweet on the planet. SPC12 also welcomed a new audience to team up with the Developer and IT Pro audiences: Executives. SPC now boasted over 20,000 minutes of expert content targeted towards the three main SharePoint communities.

Mandalay Bay was illuminated with life as raffles, giveaways, and armfuls of swag by day turned into mouthwatering food, choreographed dancing, and lasting friendships by night. People cheered from balconies and lined the pools as Bon Jovi shook the city at the SPC12 private party, sandwiched between days of deep dive sessions and engaging workshops and labs.

Connect, reimagine, transform…

Connect, reimagine and transform. The 2014 SharePoint Conference kicked off again under the bright lights of Las Vegas, NV.

Microsoft took over the Venetian Hotel and Resort in Las Vegas, Nevada from March 3-6 for the largest and most comprehensive event on the planet for SharePoint, Yammer, Office 365 and related technologies, with a keynote delivered by former President Bill Clinton. #SPC14 launched Office Delve, a renewed enthusiasm for Yammer, and of course the Office Graph.

 

 

Connect. Collaborate. Create.

After a brief 4-year hiatus, the SharePoint Conference returned in the Spring of 2018 to a familiar location, Las Vegas, Nevada at the world famous MGM Grand. The 2018 SharePoint Conference was the first SharePoint Conference to serve not only an in-person audience of 2,500+ attendees, but also an online audience who tuned in to our live-streamed keynote as part of our SharePoint Virtual Summit!

SPC18 was 2018’s premier event for SharePoint and related technologies, featuring over 140 sessions (8,400 minutes) of content under one roof. SPC18 was a celebration not only of the return of the SharePoint Conference, but also the community that makes it all possible – in addition to some of the world’s best speakers and authorities on Office 365, SPC18 closed out the event with the world’s best party band the B-52s!

 

It’s time to bring the community back together again!

SPC is back! Join the world’s top SharePoint and Microsoft 365 experts in a setting perfectly suited for our extraordinary community. Now is the time to create your own SharePoint story. A huge variety of topics geared towards our three main audiences have been reengineered to provide groundbreaking solutions for today's and tomorrow's problems.

Bill Gates was right… this is revolutionary. Be a part of it.

 

 

ConfigMgr Current branch (1810+) guidance for the SQL CE levels with various SQL versions


Hi Folks,

So, first things first, before I bombard non-SQL ConfigMgr admins with SQL jargon.

What is CE or Cardinality Estimation?

The CE predicts how many rows your query will likely return. The cardinality prediction is used by the Query Optimizer to generate the optimal query plan. With more accurate estimations, the Query Optimizer can usually do a better job of producing a more optimal query plan.

If you are reading this, you may have already come across scenarios in ConfigMgr where you had to manually change the Cardinality Estimator in SQL to a lower (legacy) level, which at times improves performance.

Why does this happen?

The ConfigMgr provider queries were written and tested with older versions of SQL Server, so it is possible that a provider query on a newer SQL version chooses a less-than-optimal execution plan.

Given the nature of the issue, it was not technically feasible to test the innumerable SQL queries for each SQL version and correct the code for each of them.

So what did we do then?

The ConfigMgr team decided to simply run the provider queries at a lower CE level (110), at which they are known to perform well.

How is it implemented?

We simply don't want ConfigMgr admins to have to touch those CE levels, so we let the ConfigMgr code do the best for us.

On the remote providers you will now see a registry value:

UseLegacyCardinality -> set to 1


 

 

SQL Server version: SQL Server 2016+
Supported compatibility level values: 140 (SQL 2017), 130 (SQL 2016), 120 (SQL 2014), 110 (SQL 2012)
Current behavior with ConfigMgr 1810+:

  • Local and remote providers will use OPTION(USE HINT('FORCE_LEGACY_CARDINALITY_ESTIMATION'))
  • In 1810, the DB will be changed to 130 if we are running SQL 2016 at setup time. (We don't change it if they are on SQL 2017.)
  • This means right after the 1810 install the Admin UI/providers will issue queries at 110, but the “ConfigMgr backend” will use 130 (or 140).
  • The main reason people change to 110 is Admin UI performance; this should make it so people don't have to do anything, yet the backend is set to the “native” SQL level.
  • In 1902 we will stop changing SQL 2016 to 130 at setup time, in case it was changed back to 110 on purpose to allow the backend queries to run at 110.

SQL Server version: SQL Server 2014
Supported compatibility level values: 120, 110
Current behavior with ConfigMgr 1810+:

  • The DB will be set to 110 at setup time (since 120 is so bad all around).
  • So all queries (Admin UI/provider and “ConfigMgr backend”) will run at 110.

 

We still support running the DB at 110 as per this KB: https://support.microsoft.com/en-us/help/3196320

We just hope most people won’t need to do this anymore after 1810.

 

Having said all the above, there are still some provider queries that seem to perform badly at 110 and better at the latest CE level,

OR

“ConfigMgr backend” queries that run better at the legacy CE level than at the latest CE level.

 

What options are there for folks who want to run provider queries with the latest CE level after 1810?

  • Changing the above UseLegacyCardinality registry key to 0 makes the providers use the CE level currently set on the database.
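As an illustration only, a minimal PowerShell sketch of that change might look like the following. The registry key path below is a placeholder (the actual provider key is the one shown in the screenshot earlier in this post); verify it on your own SMS Provider machine before running anything:

```powershell
# Hypothetical sketch: switch a remote provider back to the current CE level.
# NOTE: the key path is a placeholder - use the provider key from the screenshot.
$providerKey = "HKLM:\<SMS Provider key shown in the screenshot above>"
Set-ItemProperty -Path $providerKey -Name "UseLegacyCardinality" -Value 0 -Type DWord

# Confirm the new value
(Get-ItemProperty -Path $providerKey -Name "UseLegacyCardinality").UseLegacyCardinality
```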

 

What options are there for running the “ConfigMgr backend” queries with legacy cardinality?

ALTER DATABASE <CM_DB>
SET COMPATIBILITY_LEVEL = 110;
GO

Note: In the above example, replace <CM_DB> with your Configuration Manager site database name. To change the CE compatibility level to a different level, change the value in SET COMPATIBILITY_LEVEL.

OR

ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON
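If you prefer to drive this from PowerShell, a minimal sketch using the SqlServer module's Invoke-Sqlcmd might look like this; the server name "SQLSERVER01" and database name "CM_ABC" are placeholders for your own site server and site database:

```powershell
# Hedged sketch: inspect the site database's compatibility level before changing it.
Import-Module SqlServer
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Query @"
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'CM_ABC';
"@
```

Checking the current level first makes it easy to confirm whether a previous admin already lowered it on purpose.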

 

Hope it helps!

~UK

Support Escalation Engineer | Microsoft System Center Configuration Manager

Disclaimer: This posting is provided “AS IS” with no warranties and confers no rights.

Azure Backup now supports PowerShell and ACLs for Azure Files


By Vishnu Charan TJ (Program Manager II, Azure Backup)

This post is a localized version of Azure Backup now supports PowerShell and ACLs for Azure Files, originally published on January 22, 2019.

 

We have released a new set of features for natively backing up Microsoft Azure file shares with Azure Backup. All of the backup features released today also work with file shares connected to Azure File Sync.

Azure Files with NTFS ACLs

We have previewed the ability of Azure Backup to preserve and restore New Technology File System (NTFS) access control lists (ACLs). Starting in 2019, Azure Backup automatically captures file ACLs when backing up your file shares. If you ever need to go back to an earlier state, you can restore the file ACLs along with your files and folders.

Drive Azure Backup with PowerShell

You can now script backups of your Azure file shares using PowerShell. PowerShell commands let you configure backups, create on-demand backups, and restore files from file shares protected by Azure Backup.

On-demand backups, which can retain snapshots for up to 10 years, can also be triggered from PowerShell. Using a scheduler, you can run on-demand PowerShell scripts with a chosen retention period to create snapshots at regular (weekly, monthly, yearly) intervals. See here for the limits on on-demand backups with Azure Backup.
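As an illustration only (not from the original post), an on-demand backup of an Azure file share with a custom retention might be scripted roughly as follows; the resource group, vault, storage account, and share names are placeholders, and the sketch assumes the Az.RecoveryServices module:

```powershell
# Hedged sketch: on-demand backup of an Azure file share via Az.RecoveryServices.
# All resource names below are placeholders.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "myRG" -Name "myVault"
$container = Get-AzRecoveryServicesBackupContainer -ContainerType AzureStorage `
    -FriendlyName "mystorageaccount" -VaultId $vault.ID
$item = Get-AzRecoveryServicesBackupItem -Container $container `
    -WorkloadType AzureFiles -Name "myfileshare" -VaultId $vault.ID

# Trigger the backup and keep the snapshot for one year
Backup-AzRecoveryServicesBackupItem -Item $item -VaultId $vault.ID `
    -ExpiryDateTimeUTC (Get-Date).ToUniversalTime().AddYears(1)
```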

If you need sample scripts, contact AskAzureBackupTeam@microsoft.com. We have sample Azure Automation runbook scripts that schedule periodic backups and retain them for up to 10 years.

Manage backups

Last year we significantly improved backup management, which can now be initiated directly from the Azure Files portal. As soon as you configure protection of a file share with Azure Backup, the [Snapshots] button in the Azure Files portal changes to [Manage backups].

From [Manage backups] you can create on-demand backups; restore file shares, individual files, or individual folders; and change the backup scheduling policy. You can also navigate to the Recovery Services vault that backs up the file share and edit the backup policy for the Azure file share.

Email alerts

We have turned on backup alerts for Azure file share backup and restore jobs. With this capability, failed jobs can be reported to a designated email address.

Best practices

A common issue with storage accounts, file shares, and the snapshots Azure Backup creates is inadvertent deletion of data. We recommend locking storage accounts that have Azure Backup enabled so that restore points cannot be deleted. You are also now warned before deleting a protected file share or a snapshot created by Azure Backup, which helps prevent accidental data loss.

Related resources

 

 

Microsoft Cloud News Roundup – General availability of Availability Zones for Service Bus and Event Hubs, and more (January 23, 2019)


By the Cloud Platform Team

This post is a localized version of Cloud Release Announcements for January 23, 2019, originally published on January 23, 2019.

 

Azure API Management | Preview of OpenAPI Version 3 support in API Management

The OpenAPI Specification (formerly known as Swagger) is a standard, programming-language-agnostic interface definition for REST APIs.

With this preview release, Azure API Management supports Version 3 of the specification, in addition to the already supported Versions 1.2 and 2.0. The implementation of this capability follows the OpenAPI.NET SDK.

For details, see this blog post.

Availability Zones | General availability of Availability Zones for Service Bus and Event Hubs

We have added Availability Zones support for Azure Service Bus Premium and Azure Event Hubs Standard in the following regions:

  • East US 2
  • West US 2
  • West Europe
  • North Europe
  • France Central
  • Southeast Asia

With this, Availability Zones support for Azure Service Bus Premium and Azure Event Hubs Standard is now generally available in all Azure regions that have zone-redundant datacenters. Note that this capability does not work with existing namespaces; to use it, you need to provision a new namespace.

We have also added support for the Azure Service Bus Premium tier in the following regions:

  • China North 2
  • China East 2
  • Australia Central
  • Australia Central 2
  • France Central
  • France South

The Premium tier provisions dedicated resources, so workloads get high predictability and performance with a predictable pricing model, along with advanced enterprise features such as Availability Zones, geo-disaster recovery, and Virtual Network service endpoints.

Azure Database | Support for online migrations from Amazon RDS for MySQL to Azure Database for MySQL

Using the generally available capabilities of Azure Database Migration Service, you can migrate from Amazon RDS for MySQL to Azure Database for MySQL while the source database remains online. For details on performing online migrations with minimal downtime using Azure Database Migration Service, see the tutorial "Migrate MySQL to Azure Database for MySQL online using DMS."

Azure Database | Support for online migrations from Amazon RDS for PostgreSQL to Azure Database for PostgreSQL

Using the generally available capabilities of Azure Database Migration Service, you can migrate from Amazon RDS for PostgreSQL to Azure Database for PostgreSQL while the source database remains online. For details on performing online migrations with minimal downtime using Azure Database Migration Service, see the tutorial "Migrate PostgreSQL to Azure Database for PostgreSQL online using DMS."

Azure SQL Database | Support for online migrations from Amazon RDS for SQL Server to Azure SQL Database

Using the generally available capabilities of Azure Database Migration Service, you can migrate from Amazon RDS for SQL Server to Azure SQL Database with minimal downtime. For details on performing online migrations from Amazon RDS for SQL Server to Azure SQL Database with minimal downtime using Azure Database Migration Service, see the tutorial "Migrate SQL Server to Azure SQL Database online using DMS."

HDInsight Tools – new features

Azure HDInsight Tools for VSCode

Azure HDInsight Tools for VSCode is now generally available. It makes authoring Apache Hive batch jobs, interactive Hive queries, and PySpark jobs easier than ever. HDInsight Tools for VSCode is a lightweight, keyboard-centric, cross-platform code editor that is free of platform constraints and dependencies, so it works smoothly on Windows, Linux, and Mac.

For details, see the documentation and this blog post.

Spark diagnostics and debugging toolkit

HDInsight's rich development and debugging capabilities have been enhanced for Spark developers with the following:

  • A job graph with playback and heat maps to identify read/write bottlenecks.
  • Executor usage analysis showing executor utilization and job execution efficiency.
  • Data skew detection and analysis.
  • Job data management, including data preview, download, and copy.

For details, see the documentation and this blog post.

 

[Customer story] Tetra Pak's total packaging technology – safe food and beverages from farm to table [updated 1/29]


Cows don't make milk on people's schedules.

Neither dairy farms, where each cow produces an average of 20 to 30 liters of milk per day, nor the plants that pack that milk into cartons can escape the workings of nature.

If even one packaging line breaks down, the entire plant can stop for days, and large volumes of milk, which cannot be stored for long, quickly spoil. All the while, milk keeps forming inside the cows and milking continues.

System continuity has become an urgent issue not only for the dairy industry, but for the many plants that process and package the fresh food sold in grocery stores.

Tetra Pak, a pioneer of the packaging industry, has adopted new digital tools to prevent these system interruptions, which affect the entire food industry. The goal is to prevent breakdowns by connecting plant equipment to the cloud and using the tools to predict the right time for maintenance. When a repair is needed, Tetra Pak service engineers put on HoloLens headsets to quickly diagnose and fix equipment problems from far away.

Read the rest here.


Bring-your-own-license (BYOL) of Windows Server / SQL Server to AWS


This document summarizes the options available for deploying Microsoft products on AWS, and explains the benefits and options of deploying on Azure with the Azure Hybrid Benefit.

Deployment options on AWS

Shared servers

EC2 instances that run on shared servers which also host other customers' virtual machines.

Dedicated Instances

EC2 instances on a physical server that runs EC2 instances for a single customer only.
However, no visibility into the server's core count is provided.

Dedicated Hosts

A physical server that runs EC2 instances and is fully dedicated to a single customer.
Visibility into the server's core count is also provided.

* There is no BYOL option for PaaS services, such as AWS RDS, that offer SQL Server functionality.

How to use the Microsoft products you own in the cloud

Azure Hybrid Benefit

A benefit available only on Azure for customers who have active Software Assurance or subscriptions for Windows Server or SQL Server.

License Mobility

A Software Assurance benefit that lets customers move software they are licensed for to the shared servers of authorized mobility partners.

 

Windows Server

| | AWS shared servers | AWS Dedicated Instances | AWS Dedicated Hosts |
| --- | --- | --- | --- |
| With active SA or subscription | ✖ | ✖ | ✔ |
| Without active SA or subscription | ✖ | ✖ | ✔ * |

* Because AWS Dedicated Hosts are considered infrastructure dedicated to a single customer, the Microsoft Product Terms do not require active Software Assurance. However, customers who purchase and use licenses alone on dedicated infrastructure cannot use Software Assurance benefits.

SQL Server

| | AWS shared servers | AWS Dedicated Instances | AWS Dedicated Hosts |
| --- | --- | --- | --- |
| With active SA or subscription | ✔ * | ✔ * | ✔ |
| Without active SA or subscription | ✖ | ✖ | ✔ ** |

* License Mobility for IaaS deployments is required.
** Active Software Assurance or a subscription is not required, but Software Assurance benefits are not available.

Additional considerations

  • Azure Hybrid Benefit for SQL Server includes a 180-day migration period, during which you can keep running SQL Server on-premises (or on shared servers under Software Assurance License Mobility) while testing and migrating on Azure.
  • Customers can combine the Hybrid Benefits for SQL Server and Windows Server to optimize costs and maximize their investment in Microsoft server products.
  • Azure Hybrid Benefit for SQL Server lets you run a passive failover instance, ready in case a failover occurs, at no additional cost.
  • Azure Hybrid Benefit for SQL Server is an Azure-only benefit that lets you apply your existing investments to both IaaS and PaaS services to modernize your workloads.
  • The Hybrid Benefit for Windows Server Datacenter lets customers use their licenses both on-premises and on Azure.

 

Why choose AWS Dedicated Hosts?

Security and compliance

On Dedicated Hosts (unlike Dedicated Instances), customers can use specific configurations to meet corporate compliance and regulatory requirements.

Lower licensing costs

AWS Dedicated Hosts (unlike Dedicated Instances) count as licensed servers under the Microsoft Product Terms, and deploying Windows Server and SQL Server on them does not require Software Assurance or an active subscription.

While Microsoft does not offer a dedicated-host compute service, it makes its security commitments clear in the Azure Trusted Cloud.

How to maximize the value of your investment in Microsoft products

Azure compared with AWS shared servers and AWS Dedicated Instances

Azure compared with AWS Dedicated Hosts

Important information about AWS Dedicated Hosts

  • An AWS Dedicated Host can use only one instance type and size. This means customers cannot run VMs of different sizes or types on a Dedicated Host.
  • An additional charge of roughly 10% applies on top of the base compute price of shared AWS EC2 servers.
  • The hourly charge for a Dedicated Host accrues regardless of how many instances are running. Whether you run a single instance or the maximum number of instances, the hourly rate, including the roughly 10% premium, is always billed.

 

Tip of the Day: Microsoft Edge: Making the web better through more open source collaboration


Today's tip...

For the past few years, Microsoft has meaningfully increased participation in the open source software (OSS) community, becoming one of the world’s largest supporters of OSS projects. We've announced that we intend to adopt the Chromium open source project in the development of Microsoft Edge on the desktop to create better web compatibility for our customers and less fragmentation of the web for all web developers.

As part of this, we intend to become a significant contributor to the Chromium project, in a way that can make not just Microsoft Edge — but other browsers as well — better on both PCs and other devices.

Over the next year or so, we’ll be making a technology change that happens “under the hood” for Microsoft Edge, gradually over time, and developed in the open so those of you who are interested can follow along. The key aspects of this evolution in direction are:

  1. We will move to a Chromium-compatible web platform for Microsoft Edge on the desktop.
  2. Microsoft Edge will now be delivered and updated for all supported versions of Windows and on a more frequent cadence.
  3. We will contribute web platform enhancements to make Chromium-based browsers better on Windows devices.

How to work with Inactive Mailboxes


It usually starts with the following question: Is there a way to release the license of an Exchange Online user who left the company but, at the same time, keep the mailbox content? To get this done, we have a feature called Inactive mailboxes, but because we have seen some of our customers get a bit confused about the sequence of steps needed to do this correctly, I wanted to cover this scenario.

Scenario

David is a cloud-only user with an Office 365 Enterprise E5 license. David is leaving the company, so as an admin I need to remove his account, but I still need to have access to his emails.

Note: In this article, we will provide the steps that have to be taken in order to correctly move a mailbox from the Active state to the Inactive state. Details about how to access the content of an inactive mailbox can be found here.

Before you begin

You have to be connected with PowerShell to Azure Active Directory / Microsoft Online Directory Service (MSODS) and to Exchange Online in order to complete tasks mentioned on this article.

Steps to take

1. Put the mailbox on hold (which will also place the archive on hold, if present). For this scenario I've used Litigation Hold, but any hold from Exchange Online or Security & Compliance can be used:

Set-Mailbox David -LitigationHoldEnabled $True -LitigationHoldDuration Unlimited

Note: The hold setting may take up to 60 minutes to take effect.

2. Ensure the mailbox has Litigation Hold enabled:

Get-Mailbox David | fl PrimarySMTPAddress, Identity, LitigationHoldEnabled, LitigationHoldDuration, MailboxPlan, PersistedCapabilities, SKUAssigned

User properties should now show:

PrimarySmtpAddress : David@contoso.com
Identity : David
LitigationHoldEnabled : True
LitigationHoldDuration : Unlimited
MailboxPlan : ExchangeOnlineEnterprise-0527a260-bea3-46a3-9f4f-215fdd24f4d9
PersistedCapabilities : {BPOS_S_O365PAM, BPOS_S_ThreatIntelligenceAddOn, BPOS_S_EquivioAnalytics, BPOS_S_CustomerLockbox, BPOS_S_Analytics, BPOS_S_Enterprise}
SKUAssigned : True

3. Check the number of licenses you have in total/assigned:

Get-MsolAccountSku | fl AccountSkuId, ActiveUnits, ConsumedUnits

Example of what you might get:

AccountSkuId : contoso:ENTERPRISEPREMIUM
ActiveUnits : 25
ConsumedUnits : 3

ConsumedUnits represents the number of licenses that are currently assigned.

4. Remove the Azure Active Directory user, which will move the mailbox to the inactive state:

Remove-MsolUser -UserPrincipalName David@contoso.com

5. Check if the mailbox was deleted and became an inactive mailbox:

Get-Mailbox David -InactiveMailboxOnly | fl PrimarySMTPAddress, Identity, LitigationHoldEnabled, LitigationHoldDuration, SKUAssigned, IsInactiveMailbox, IsSoftDeletedByRemove, WhenSoftDeleted

The results should be similar to:

PrimarySmtpAddress : David@contoso.com
Identity : Soft Deleted Objects\David
LitigationHoldEnabled : True
LitigationHoldDuration : Unlimited
SKUAssigned : False
IsInactiveMailbox : True
IsSoftDeletedByRemove : True
WhenSoftDeleted : 6/4/2018 6:42:11 AM

6. Check if the Azure Active Directory user was deleted (you should be able to see it in the list of Deleted users, or you can run a command similar to the one below):

Get-MsolUser -ReturnDeletedUsers -All | where {$_.ProxyAddresses -match "David@contoso.com"} | fl UserPrincipalName, IsLicensed, Licenses

The results should be similar to:

UserPrincipalName : David@contoso.com
IsLicensed : True
Licenses : {contoso:ENTERPRISEPREMIUM}

7. Check the number of licenses you have in total/assigned (the license for the user that is now deleted should be released):

Get-MsolAccountSku | fl AccountSkuId, ActiveUnits, ConsumedUnits

The results should be similar to:

AccountSkuId : contoso:ENTERPRISEPREMIUM
ActiveUnits : 25
ConsumedUnits : 2

Optional (if you want to remove the Azure Active Directory user for good):

8. Wait 30 days to have the Azure Active Directory user deleted from the Deleted Users list, or run a command similar to the one below in order to permanently remove the user:

Get-MsolUser -ReturnDeletedUsers -All | where {$_.ProxyAddresses -match "David@contoso.com"} | Remove-MsolUser -RemoveFromRecycleBin

9. Check if the user still exists in the Active Users, or in Deleted Users (for both commands no results should be returned and you should not see the user within Deleted users anymore):

Get-MsolUser -All | where {$_.ProxyAddresses -match "David@contoso.com"}
Get-MsolUser -ReturnDeletedUsers -All | where {$_.ProxyAddresses -match "David@contoso.com"}

10. Verify that the mailbox is still in the inactive state, and the Litigation Hold is still enabled:

Get-Mailbox David -InactiveMailboxOnly | fl PrimarySMTPAddress, LitigationHoldEnabled, LitigationHoldDuration, SKUAssigned, IsInactiveMailbox

The result should be similar to:

PrimarySmtpAddress : David@contoso.com
LitigationHoldEnabled : True
LitigationHoldDuration : Unlimited
SKUAssigned : False
IsInactiveMailbox : True
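For convenience, the steps above can be combined into a single hedged sketch for one user; it assumes you are already connected to both Exchange Online and MSOnline, and uses the same placeholder names as this article:

```powershell
# Hedged sketch: place a mailbox on hold, then delete the user to make it inactive.
$upn  = "David@contoso.com"
$name = "David"

# Step 1: enable Litigation Hold (can take up to 60 minutes to apply)
Set-Mailbox $name -LitigationHoldEnabled $true -LitigationHoldDuration Unlimited

# Step 2: do not delete the user until the hold is confirmed
if (-not (Get-Mailbox $name).LitigationHoldEnabled) {
    throw "Litigation Hold is not active yet - wait before removing the user."
}

# Step 4: delete the Azure AD user; the held mailbox becomes inactive
Remove-MsolUser -UserPrincipalName $upn -Force

# Step 5: verify the mailbox is now inactive
Get-Mailbox $name -InactiveMailboxOnly |
    Format-List PrimarySmtpAddress, IsInactiveMailbox, LitigationHoldEnabled
```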

References; additional information / more details on:

Thanks to Mark Johnson, Nino Bilic and Murali Natarajan for their support and contribution to this blog post!

Cristian Dimofte

Configuration Manager: ‘The encryption type requested is not supported by the KDC’ Error When Running Reports


___________________________________________________________________________________________________________________________

IMPORTANT ANNOUNCEMENT FOR OUR READERS!

AskPFEPlat is in the process of a transformation to the new Core Infrastructure and Security TechCommunity, and will be moving by the end of March 2019 to our new home at https://aka.ms/CISTechComm (hosted at https://techcommunity.microsoft.com). Please bear with us while we are still under construction!

We will continue bringing you the same great content, from the same great contributors, on our new platform. Until then, you can access our new content on either https://aka.ms/askpfeplat as you do today, or at our new site https://aka.ms/CISTechComm. Please feel free to update your bookmarks accordingly!

Why are we doing this? Simple really; we are looking to expand our team internally in order to provide you even more great content, as well as take on a more proactive role in the future with our readers (more to come on that later)! Since our team encompasses many more roles than Premier Field Engineers these days, we felt it was also time we reflected that initial expansion.

If you have never visited the TechCommunity site, it can be found at https://techcommunity.microsoft.com. On the TechCommunity site, you will find numerous technical communities across many topics, which include discussion areas, along with blog content.

NOTE: In addition to the AskPFEPlat-to-Core Infrastructure and Security transformation, Premier Field Engineers from all technology areas will be working together to expand the TechCommunity site even further, joining together in the technology agnostic Premier Field Engineering TechCommunity (along with Core Infrastructure and Security), which can be found at https://aka.ms/PFETechComm!

As always, thank you for continuing to read the Core Infrastructure and Security (AskPFEPlat) blog, and we look forward to providing you more great content well into the future!

__________________________________________________________________________________________________________________________

 

Introduction

Hello, my name is Richard McIver and I'm a Premier Field Engineer with Microsoft specializing in System Center Configuration Manager.

I was recently working with a customer who suddenly started receiving a strange KDC error when attempting to run Configuration Manager reports from either within the Administration Console or the Reporting Services web portal. It took quite a bit of troubleshooting to isolate the root cause, so I'd just like to share our findings and resolution steps.

 

Problem Description

When running Configuration Manager reports that rely on Role Based Access Control (RBAC), SQL Server Reporting Services (SSRS) will attempt to communicate with Active Directory via Kerberos authentication to resolve the Security Identifier (SID) of the user.

However, when this customer attempted to run reports with RBAC embedded, the following error was displayed and the report failed to load.

The DefaultValue expression for the report parameter 'UserTokenSIDs' contains an error: The encryption type requested is not supported by the KDC. (rsRuntimeErrorInExpression)

The customer environment was SQL Server 2016 Reporting Services running on Windows Server 2012 R2, however I've since been able to replicate this issue on Windows Server 2016 as well.

 

Root Cause Analysis

We eventually traced the root cause down to a security policy setting on the reporting point server that was recently configured via a domain Group Policy Object (GPO).

Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options\Network security: Configure encryption types allowed for Kerberos: AES128_HMAC_SHA1, AES256_HMAC_SHA1, Future encryption types selected

As configured, this setting has the effect of limiting the encryption types allowed for Kerberos authentication from the reporting point server to only AES128, AES256, and Future encryption types.

However, the service account used by the SQL Reporting Services service was not properly configured to support these algorithms. Instead, SSRS was attempting to authenticate using the RC4 encryption type, which is no longer allowed on the server, resulting in the KDC error.

 

Remediation

In this case, the error can be resolved in one of two ways.

  1. Enable AES 128-bit and/or AES 256-bit encryption for the SQL Reporting Services service account
  2. Configure the Network security: Configure encryption types allowed for Kerberos policy setting on the reporting point server to include the RC4_HMAC_MD5 encryption type

Steps to enable AES encryption for the SQL Reporting Services service account

  1. Open Active Directory Users and Computers
  2. Browse to the user account used by SQL Reporting Services on the affected server
  3. Right-click the user account and select Properties
  4. Click on the Account tab
  5. Under Account options, check the box next to one or both of the following:
    1. This account supports Kerberos AES 128 bit encryption
    2. This account supports Kerberos AES 256 bit encryption
  6. Click OK
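If you prefer scripting this change, it can be sketched with the ActiveDirectory PowerShell module; the account name "svc-ssrs" below is an assumed placeholder for your SQL Reporting Services service account:

```powershell
# Hedged sketch: allow AES 128/256 Kerberos encryption on the SSRS service account.
Import-Module ActiveDirectory
Set-ADUser -Identity "svc-ssrs" -KerberosEncryptionType AES128, AES256

# Confirm the change took effect
Get-ADUser -Identity "svc-ssrs" -Properties KerberosEncryptionType |
    Select-Object Name, KerberosEncryptionType
```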

Steps to configure the policy setting Network security: Configure encryption types allowed for Kerberos

Method 1 - Local Security Policy

  1. On the affected server, open an elevated command prompt
  2. Type SECPOL and hit Enter
  3. In the Local Security Policy management console, expand Local Policies and click on Security Options
  4. Scroll down in the left-hand pane until you find the setting Network security: Configure encryption types allowed for Kerberos
  5. Right-click this setting and select Properties
  6. In the Local Security Settings tab, check the box next to RC4_HMAC_MD5, AES128_HMAC_SHA1, AES256_HMAC_SHA1, and Future encryption types
  7. Click OK

Method 2 - Group Policy Object (GPO)

  1. Open the Group Policy Management console and edit a new or existing GPO
  2. In the Group Policy Management Editor, expand Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options
  3. Right-click on Network security: Configure encryption types allowed for Kerberos and click Properties
  4. On the Security Policy Setting tab, check the box to Define these policy settings
  5. Check the box next to RC4_HMAC_MD5, AES128_HMAC_SHA1, AES256_HMAC_SHA1, and Future encryption types
  6. Click OK
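To verify what the policy actually wrote on the reporting point server, you can read the underlying registry value; this is a hedged sketch, and the bit values are the documented flags (RC4_HMAC_MD5 = 0x4, AES128_HMAC_SHA1 = 0x8, AES256_HMAC_SHA1 = 0x10, Future encryption types = 0x7FFFFFE0, so all four together come to 0x7FFFFFFC):

```powershell
# Hedged sketch: read the SupportedEncryptionTypes value written by the policy.
$key = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters"
$value = (Get-ItemProperty -Path $key -Name SupportedEncryptionTypes -ErrorAction Stop).SupportedEncryptionTypes
"SupportedEncryptionTypes = 0x{0:X}" -f $value
```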

And that's about it for now… Hopefully this helps you out, and thanks for reading!

 

References:

Microsoft Power Platform feature updates


Receive important news, articles and feature updates or releases as they pertain to the Microsoft Power Platform. These technical webinar events, available to you as a Partner Network member at no cost, will allow you to learn about the latest features for technologies such as PowerApps, Flow and Common Data Service for Apps.

Adopting the Microsoft Power Platform – Feature Update Series: Remain up to date on the latest features released within the Microsoft Power Platform over the past month. Our Microsoft technical experts will present and demonstrate the latest features released within the Business Application Platform, such as PowerApps, Flow, Common Data Service for Apps, or any combination. You’ll receive important news, articles and feature updates regarding the Power Platform and have the opportunity to ask questions.

View the full technical journey available to help you build your Microsoft Power Platform by connecting with Microsoft Partner Technical Consultants remotely through technical webinars and consultations: aka.ms/PoweredDeviceTechJourney.

Where are they now? Microsoft's Council for Digital Good members, six months later


By: Jacqueline Beauchere, Chief Online Safety Officer at Microsoft.

Members of Microsoft's inaugural Council for Digital Good in Washington, D.C., July 2018.

In July 2018 we concluded our inaugural Council for Digital Good, an initiative involving 15 teens from 12 U.S. states, selected to help advance our work in digital civility: promoting safer and healthier interactions among all people. Six months later, and just days ahead of Safer Internet Day 2019, we want to share what these impressive young people have done since their council term ended, as well as what they have planned for the days ahead.

Since we wrapped up our second council event in Washington, D.C., in July 2018, our teens have been recapping their council experiences on social media and in their own online venues. Christina from Georgia wrote posts for the blogs of two different online-safety nonprofits (blog #1, blog #2), and several teens held out-of-class educational sessions for parents, students, and younger children. Jazmine, a 14-year-old from Kentucky with a distinctly entrepreneurial outlook and one of the youngest members of our council, launched her own website. And three council members, Bronte, Christina, and Judah, received a once-in-a-lifetime opportunity to meet for a second time and talk with First Lady Melania Trump in November 2018 at the annual conference of the Family Online Safety Institute. (All council members spent time in person with the First Lady in July 2018 in D.C.)

Council members turned advisers

Almost all of the teens have told us they have used their newly acquired knowledge to advise friends and classmates who have experienced online risks. "I applied for the council because I wanted to make an impact on cyberbullying on social media," said Erin of Michigan. "Through the council, I've learned there are many more dangers impacting young people across different platforms, and now that I'm knowledgeable about those issues, I can share that knowledge with the students and parents in my community."

In a few cases, the risk exposure among peers was quite serious, involving sexual extortion or harassment. After interacting on several occasions through the council with the nonprofit Thorn, one teen was able to share relevant resources with a friend of a friend. "I knew they (Thorn) had a text-message helpline and I was able to direct her to them," this council member said. "She didn't contact me afterward, which I think is a good sign."

Council members have also started wide-ranging conversations with friends and family about serious online problems such as violent extremism. "Something I've talked about a lot (with friends) is the process of young people being radicalized online into hate groups," said one teen who is now in college. "It's an unfortunate yet fascinating topic to discuss. We talk about the geopolitics involved, the technical sophistication (of extremist groups), and what can be done online to stop them. I talk about this thanks to what I learned on our call with Public Safety Canada."

Over the course of the council's 18-month program, we held monthly calls with the teens and their parents. We invited guest speakers so the teens could hear and learn firsthand from experts, such as Thorn, on different topics related to online safety. In late 2017, officials from Public Safety Canada spoke with the teens about online hate and violent extremism, and asked council members for their thoughts on how best to reach young people with impactful counter-messaging.

"For me, there's no better opportunity than talking and debating about the various problems the internet has created over time," said William of Washington state. "My favorite part was discussing the different issues and learning from my peers. I miss being able to give my ideas to other organizations… I feel like I was contributing to something much bigger than myself."

Since then, many of the teens have told us that in addition to missing one another, they also miss the monthly calls and interacting with outside groups and NGOs. Others said they miss working as a team on projects such as the group-written manifesto and their open letter to politicians and policymakers. One of my favorite responses: "I miss having a platform where I knew I would be heard."

Toward Safer Internet Day 2019 and beyond

International Safer Internet Day falls on February 5, 2019, and many of our teens plan to spread the message of "Together for a safer internet" in their schools and communities. More than half of the council members plan to give presentations to their Parent-Teacher Associations, schools, clubs or other organizations, and will reach out to educators, school administrative staff, peers and local elementary schools to organize activities. Erin of Michigan even requested that Safer Internet Day and other important web links on online safety topics be included on school and district calendars.

Each teen crafted their own presentations and chose the discussion topics for their Safer Internet Day events. Combating online bullying and harassment are popular topics, but many are also focused on managing one's online reputation and online footprint. "I'm passionate about internet safety and social activism," said Indigo of California. "It's important to me to make sure every person is safe, comfortable and respected. Especially as technology and social media continue to advance, we need to keep fighting for these rights. The council and all the things we discussed stay with me, especially the aspect of how your online persona and reputation will undoubtedly affect who you are in real life."

After February 5, some council members said they plan informational sessions for parents and other adults, given the impact these individuals have on young people's lives. According to new research from our latest digital civility study, now more than ever, young people around the world look to their parents and other trusted adults for advice and guidance on online problems. "It's just as important to educate adults," William adds.

Christina has an internship opportunity with an international nonprofit, and some of the teens will be contacted to discuss their council experiences with other technology companies that are considering establishing councils or other youth-based initiatives.

At Microsoft, we are deeply grateful to these young people and their parents for what they have given us over the past two years. As a connected global community, we must improve online safety and interactions, and young people like these push us forward.

"All I can do is improve the way I act online, and how I leave my online footprint," said Bronte of Ohio. "I can also invite my classmates, friends and family to behave better online, and to really think before posting something they might later regret. Step by step, change can happen… It has to start somewhere!"

Bronte, we agree with you.

Learn more

You can read the council's joint manifesto here, as well as their open letter to U.S. politicians and policymakers about working together to improve life online. To learn more about digital civility, visit www.microsoft.com/digitalcivility, and for more information on online safety in general, check out our website and our resources page; "like" us on Facebook and follow us on Twitter.

Office 365: Dynamic distribution lists based on email domain


Dynamic distribution lists offer an excellent mechanism to email groups of users based on their attributes rather than maintaining group membership manually.  One request I often receive is how to create a distribution list for everyone whose primary SMTP address is in a certain domain.

By design, a dynamic distribution list supports a recipient filter property.  The recipient filter allows administrators to specify a query that determines which objects are included in the dynamic distribution list.  With this in mind, many administrators attempt to use the recipient filter to filter on a domain when creating the dynamic distribution list.  Here is an example:

PS C:> New-DynamicDistributionGroup -Name TestDynamicDL -RecipientFilter {PrimarySMTPAddress -like '*@contoso.com'}
Wildcards cannot be used as the first character. Please revise the filter criteria.
     + CategoryInfo          : NotSpecified: (:) [], TaskArgumentException
     + FullyQualifiedErrorId : [Server=MWHPR06MB2446,RequestId=c6124483-2c49-4542-a783-4177dbe9119e,TimeStamp=1/29/2019
     3:10:33 PM] [FailureCategory=Cmdlet-TaskArgumentException] AE6DCA3
     + PSComputerName        : ps.outlook.com

In the recipient filter, the wildcard character cannot be the first character.  This effectively prevents us from querying for every recipient, regardless of alias, whose address ends in the specified domain.

Without the ability to lead with a wildcard character, we need to be more creative in how we approach this solution.  I find the easiest and most common recommendation is to utilize a custom attribute that is not being used for another purpose.  In an environment where directory synchronization is in place, the custom attributes are sourced from on-premises Active Directory, and administrators have easy access to them through the on-premises Exchange Management Shell.  Using PowerShell, administrators could craft a script that finds all objects with a given primary SMTP address where the designated custom attribute is not yet set to the predetermined value, and then updates the custom attribute.  The script that performs these operations could be scheduled.  Here is an example:

#Gather all remote mailboxes where the primary SMTP address is *domain.org and where custom attribute 10 is not already set to the domain value.

PS C:> $mailboxes = Invoke-Command { Get-RemoteMailbox -ResultSize unlimited | where { $_.primarySMTPAddress -like "*domain.org" -and $_.customAttribute10 -ne "DOMAIN" } }

#Iterate through the array and set the custom attribute.

PS C:> $mailboxes | % { Set-RemoteMailbox -identity $_.primarySMTPAddress -CustomAttribute10 "DOMAIN" }

#Iterate through all mailboxes where the custom attribute 10 is set to the defined value and where the primary smtp address is not at the designated domain.

#These are users whose primary SMTP address changed from the designated domain.

PS C:> $mailboxes = Invoke-Command { Get-RemoteMailbox -ResultSize unlimited | where { $_.primarySMTPAddress -notlike "*domain.org" -and $_.customAttribute10 -eq "DOMAIN" } }

#Iterate through the array and NULL the custom attribute.

PS C:> $mailboxes | % { Set-RemoteMailbox -identity $_.primarySMTPAddress -CustomAttribute10 $NULL }
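Once the custom attribute is populated and synchronized, the dynamic distribution group itself can be created with a recipient filter that targets the attribute. A minimal sketch; the group name and attribute value are placeholders:

```powershell
# Hypothetical example: create the dynamic distribution group keyed off the
# stamped custom attribute rather than the (unfilterable) SMTP domain.
New-DynamicDistributionGroup -Name "DomainOrgUsers" `
    -RecipientFilter { CustomAttribute10 -eq 'DOMAIN' }
```

Because the filter evaluates CustomAttribute10 at send time, any mailbox the script stamps (or un-stamps) is picked up automatically on the next use of the group.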

There are some advantages to this approach.

  • The script can be scheduled, eliminating administrator intervention.
  • Through filtering, the script pulls the smallest number of objects to be changed on each iteration.
  • The script uses custom attributes, which are included in the default replication set with AD Connect.
  • The script sets the on-premises value, which replicates to the cloud value, ensuring it is visually easy to determine how the values are derived.

There are some disadvantages to this approach.

  • Distribution list membership is only updated after the script has executed, leading to delays in group membership changes in Office 365.
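To keep that delay bounded, the script can be run on a recurring schedule. One way, sketched here under the assumption that the script is saved locally (the script path and task name are placeholders), is to register a Windows scheduled task:

```powershell
# Hypothetical example: run the attribute-sync script every hour.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -File "C:\Scripts\Sync-DomainAttribute.ps1"'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Hours 1)
Register-ScheduledTask -TaskName 'Sync-DomainAttribute' `
    -Action $action -Trigger $trigger
```

The interval chosen sets the worst-case lag between a mailbox change on premises and the corresponding membership change in Office 365 (plus the AD Connect sync cycle itself).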

An alternate approach is to utilize the Azure Active Directory Connect synchronization rules to populate the designated custom attribute.  Using the AD Connect rules editor, we can create a low-priority inbound rule.  The rule type is a join, operating on a user object in the local Active Directory and translating to a person object in Office 365.  We will transform the specified custom attribute using an expression that searches a field for a value.  Let us take a look at how this works.

The first item to determine is which attribute to base the filter on.  If we perform a metaverse search and locate a reference member, we can review the user's attributes.  In our instance, the mail attribute reflects the user's primary SMTP address.


Having identified the mail attribute as containing the information we want to filter on, we can begin creating the rule that modifies the custom attribute.  The synchronization rule editor is used for this operation.

The first step is to ensure that the direction is set correctly.  In our instance, the direction will be INBOUND.


The second step is to select the “Add New Rule” button.  This launches the create inbound synchronization rule editor.


The third step is to establish the properties on the description page.  I recommend a name that accurately describes what the rule is going to do, as well as a description that allows future administrators to understand what has changed.  The connected system will be the local Active Directory.  The connected system object type is “User”.  The metaverse object type is “Person”.  The link type is Join.  The precedence should be lower than any of the default rules; in this case the default rules begin at 100, so we will select a precedence of 50.


The fourth step is to review the scoping filter.  In this case, no scopes will be defined.


The fifth step is to review the join rules.  In this case, there will be no join rules.


The last step is to create the transformation that will stamp custom attribute 10.  To begin this process, select the add transformation button.  Under flow type, select Expression in the drop-down menu.  Under target attribute, select the desired custom attribute; in this example, extensionAttribute10.  The source is built from the functions supported for expressions in transformations.  In this case, the following syntax:

IIF(InStr([mail],"fortmillems.org")>0,"DOMAINA","")

If the domain substring exists in the mail attribute, InStr returns its position within the string; otherwise it returns 0.  If a position greater than 0 is returned, DOMAINA is stamped; otherwise an empty value is stamped.


There are some benefits to this approach:

  • There are no scheduled tasks or scripts involved.
  • The user is immediately added as a member of the dynamic DL upon replication.

There are some disadvantages to this approach:

  • I generally discourage rule modification in AD Connect unless absolutely necessary.  Modifying rules can have unintended consequences; rule changes must be tracked, and build documentation must be prepared to ensure that any future builds of AD Connect contain the same rules.  In other words, this can introduce complications to installations.
  • The on-premises value of the custom attribute no longer matches the cloud value, since the attribute was transformed via a rule in AD Connect.  This sometimes leads to confusion as to the source of values and how to handle future modifications.
  • Modifying the rule set requires that a full AD Connect sync cycle be initiated, which may cause outages in the overall sync cycle depending on the size and number of objects to be resynchronized.
  • The proposed rule only works if there’s a single domain in question.

What happens if there are multiple domains?  If multiple domains are in play, the rule expression must be modified to use a Switch statement.  The Switch statement evaluates the mail attribute against each of the mail domains and stamps the custom attribute with the corresponding value.  As with an IIF statement, exactly one result must evaluate as true.  In this sample, anyone whose domain is not one we are interested in targeting with a dynamic distribution group is stamped NODOMAIN.  Here is the example expression:

Switch(InStr([mail],"domainA.org")>0,"DOMAINA",InStr([mail],"domainB.com")>0,"DOMAINB",True,"NODOMAIN")

You could also have the custom attribute stamped with an empty value if there is no matching domain.

Switch(InStr([mail],"domainA.org")>0,"DOMAINA",InStr([mail],"domainB.com")>0,"DOMAINB",True,"")

These are some example methods for utilizing on-premises attributes to provision dynamic distribution lists in Office 365 based on email domain.
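Whichever approach populates the attribute, the membership that a dynamic distribution list will resolve to can be previewed before anyone relies on it. A sketch; the group name is a placeholder:

```powershell
# Preview the recipients a dynamic distribution group currently resolves to
# by re-running its stored recipient filter.
$dl = Get-DynamicDistributionGroup -Identity "DomainOrgUsers"
Get-Recipient -RecipientPreviewFilter $dl.RecipientFilter `
    -OrganizationalUnit $dl.RecipientContainer
```

This is a convenient sanity check after the attribute-stamping script or the AD Connect rule first runs, since dynamic membership is otherwise only visible at message-delivery time.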
