
General Data Protection Regulation: GDPR


Is your organization ready for GDPR? Find out today!

Take advantage of the features offered across Microsoft's services to prepare for the General Data Protection Regulation (GDPR).

Assess your organization's privacy posture, uncover risks and act on them, and draw on expert guidance about GDPR. Use our free online assessment tool to move your organization toward GDPR compliance: GDPR readiness assessment

Discover how you can take control, manage compliance, and avoid risk with Microsoft 365. Download and share our white paper with your customers: Accelerate your GDPR Compliance Journey with MS365.

Learn the latest about the GDPR regulation, which takes effect on May 25 of this year. Here are some supporting resources:

GDPR: Partners are ready to help!

Vianey Hernández M.|vianeyh@microsoft.com|latampts@microsoft.com


Building the Microsoft 365 Cloud Vision and Business Case


This is the 2nd post in my "Cloud Adoption Journey – Blog Series"

In this post, we will go deeper on Phase 1 - Building the Vision and Business Case.

Key areas of consideration include:

  1. Creating the vision
  2. Assessing the financial costs and benefits
  3. The changing role of IT
  4. Driving the business forward

Creating the Vision

The start of any successful project is having executive sponsorship and a clear vision and roadmap that includes determining how we will measure success. Some common approaches and resources include:

  1. In my role at the Microsoft Technology Center, we often deliver Strategy Briefings and Architecture Design Sessions to help customers envision the "art of the possible" through a series of targeted presentations, demonstrations, whiteboard sessions, and discussions that map back to the customer's industry, challenges, and unique needs. We then work with customers to prioritize their rollout schedule and develop an actionable plan for moving forward.
  2. The Microsoft FastTrack Center has produced a set of resources to help customers further refine, document, and communicate their vision.
  3. Customers often like to hear from other customers and see examples of how others have succeeded and lessons they have learned along the way. We often facilitate customer conversations through a variety of forums and channels to help customers connect. Many of these stories have been documented and published.

Assessing the Financial Costs and Benefits

A key consideration is often determining what the costs and benefits will be of moving to the cloud.

  1. Microsoft worked with Forrester to assess the potential Total Economic Impact of moving to Office 365.
  2. The Value Discovery Workshop and Customer Immersion Experience resources help provide a more customized view for a customer and considerations related to their industry and key personas.
  3. Key questions at this phase include thinking about:
    • Possible vendor rationalization and tool/platform consolidation by moving to the cloud. Can we eliminate/replace some of these other point solutions to lower our overall total cost of ownership, simplify management, and improve the overall user experience?
    • Avoidance of purchasing additional software/services that may be planned but not yet implemented. Are these new capabilities already included in the targeted platform?
    • Ongoing maintenance and support costs that can be deferred, including hardware, software, patching, updates, upgrades, etc.
    • Resource needs for migration, training, and ongoing support.

The Changing Role of IT

Moving from managing on premises servers and infrastructure represents both a challenge and an opportunity for many IT organizations. As we move some of the "plumbing" to the cloud (e.g. servers, storage, patching, ensuring high availability, backups, etc.) this offers an opportunity to reskill and reallocate our often scarce IT resources to other potentially higher value/impact areas that can drive the business forward and add differentiated value to our business.

For example, services such as email and file storage have become a commodity. While the services themselves are important, they do not differentiate companies. My next blog post in this series will focus on how we can help customers migrate these critical functions to the cloud as this is often a common "lift and shift" type scenario that many customers start with for their initial move into the cloud.

IT organizations need to understand their role in supporting and managing a cloud service. As capabilities such as Office 365 Groups unite multiple backend services, we need to think about how we govern them and who is responsible for them within the organization.

Some key resources to help IT with their planning and education include:

  1. The Microsoft Ignite conference had many sessions led by Microsoft customers, partners, and employees that touched on many of these topics to help with planning before, during, and after the migration. These sessions were recorded and are available on demand for all to access.
  2. Microsoft Virtual Academy includes a variety of online training sessions targeted at various roles in IT.
  3. LinkedIn Learning offers additional training for Office 365 administrators and IT professionals.
  4. Microsoft Hands-on labs provides the ability to gain hands on experience in a self-paced lab environment.
  5. In addition to the targeted per customer engagements at the MTC noted above, we also run a number of hands on workshops for customers and partners to attend. Many of those are regularly updated and listed here across the MTC locations in the United States.
  6. Change management in an "evergreen" cloud environment is very different than managing updates to on-premises servers. While there are a number of resources to help in this area, the Office 365 public roadmap is a key site to leverage.
  7. Connect with your peers online via the Microsoft Tech Community and in person at Office 365 user groups and community events like SharePoint Saturday.

Driving the Business Forward

This is one of the main reasons why many organizations are embracing the cloud - how can IT and the cloud be an enabler for the business? How can we do more than just move things "as is" from on premises to the cloud? Don't get me wrong, moving email and files to the cloud is important and solves many challenges for the business and IT (e.g. storage limits/quotas, improving sharing internally and externally) but the move to the cloud offers us many opportunities to question how we have been doing things for years and look for ways to improve and enhance things.

I look at the move to the cloud as a once in a decade type event. It's like moving from your first apartment/condo/house to your next living space as your personal and family needs change. Does your business still operate the same way now as it did 5 or 10 years ago? Probably not. Use this as an opportunity to find new ways to engage the business and eliminate their need to often work around IT - also commonly referred to as "shadow IT". As you move into your new house, don't just bring along all of your legacy baggage (e.g. the stuff in your garage, attic, or closet that you haven't touched in years). You're moving to a new location with the ability to put a fresh coat of paint on the walls and new opportunities to do things differently - building upon what you have already learned along the way.

The reality is that most business users do not wake up in the morning thinking about how to use a specific IT product or feature better. They think about performing their business function better whether that is in sales, marketing, operations, finance, etc.

While the Office 365 learning center has traditional product centric training and learning resources, I encourage you to not just start there. Yes, people may want to know how to use the new version of an Office product or some of the new capabilities in SharePoint Online. However, let's use this as an opportunity to think about changing how we work and see if there is a better way of doing things. It could be simple things like using a "modern attachment" to send a link to a document stored in OneDrive for Business versus sending a copy of the document, to support co-authoring and versioning. That's a great starting point. Let's think bigger.

The Microsoft 365 Productivity Library is one of my favorite resources to leverage with customers to help look at use case, persona, and scenario based resources with the ability to filter by a particular role/function or industry. For example, if I'm a person in finance, how do I better leverage these capabilities to help me work with my team to prepare for our earnings release? If I'm in sales, how do I work with others on my account team to respond to a customer request for proposal?

The Office 365 adoption content pack enables you to get deep insight into which services are being leveraged and view usage by department and user. This is very helpful for identifying who your early adopters and champions are in the business, as well as knowing which areas need more focus on awareness, training, and adoption. Use this information to identify your early successes. Share your internal case studies and wins to build and maintain momentum with the business as you introduce new capabilities.

Moving Forward

Next month's post in this series will go deeper into how to get started with phase 2: enabling the core plumbing and supporting infrastructure to leverage Office 365. Future posts will go beyond that into opportunities to leverage new services and capabilities to transform how your employees work together internally as well as with your customers, suppliers, and other external parties.

Expanded Security & Compliance Technical Journey – New webinars and consultations added!


Do you need technical assistance to build your security and compliance practice? Explore the newly added remote technical webinars and one-to-one consultations now available as part of the Security and Compliance technical journey.


New webinars:

Adopting Microsoft 365 Control and Protect Information (no cost for MPN partners; L300)  

Adopting Microsoft 365 Enterprise-Level Identity Protection (no cost for MPN partners; L300)  

  • Upcoming webinars: 
  • Key outcomes: Provide security services to your customers by exploring Identity Protection within Microsoft 365

Adopting Microsoft 365 Proactive Attack Detection & Prevention (no cost for MPN partners; L300)  

  • Upcoming webinars: 
  • Key outcomes: Explore how to configure Microsoft 365 solutions across the cloud and see how they all work together to protect organizations.

Adopting Microsoft 365 Regulatory Compliance (no cost for MPN partners; L300)  

  • Upcoming webinars: 
  • Key outcomes: Help customers navigate the complexity of data privacy and data protection regulations using Microsoft 365


New consultations:

Microsoft 365 Security & Compliance Starter Kit Consultation (5 partner advisory hours; L100-200)

  • Partner consultation request link: https://aka.ms/M365SecurityConsult
  • Key outcomes: Understand the Microsoft 365 security and compliance offerings that are best suited for specific customer solutions.

Microsoft 365 Enterprise-Level Identity Protection Presales Consultation (unlimited access; L100-200)

  • Partner consultation request link: https://aka.ms/M365SecurityPresales
  • Key outcomes: Receive presales guidance on your customer opportunities for Enterprise-Level Identity Protection solutions for Microsoft 365.

Microsoft 365 Enterprise-Level Identity Protection Deployment Consultation (5 partner advisory hours; L300-400)

  • Partner consultation request link: https://aka.ms/M365SecurityDeployment
  • Key outcomes: As you deploy the Enterprise-Level Identity Protection capabilities of Microsoft 365, receive one-on-one technical guidance.

Looking for something else? Access the full security and compliance technical journey to view all the webinars and consultations available at aka.ms/SecurityTechJourney.

Drive consistency with Azure Stack and CSP


 Robert Kuehfus, Cloud Solutions Architect, One Commercial Partner (OCP)

To best show the process of deployment, this blog will guide you through a sample tenant created through the Cloud Solution Provider (CSP) program to connect a Microsoft Azure Stack deployment (ASDK) to an Azure Subscription. In this scenario, I'm playing the role of a Microsoft Partner with CSP and have a need for a hybrid cloud for one of my customers. In this case my customer would like to do development on-premises and run production in the public cloud. I will be showing several screenshots to help walk you through the process.

To get started, I need to have my CSP customer created with a few services. In the Microsoft Partner Center portal, let's create a new customer and make sure they have Azure Active Directory Basic and an Azure Subscription. In my case, I called the customer OCP Az Stack MSP and created an Azure Subscription called Microsoft Azure Stack Sub.

Note: If you plan to follow along, head over to our Quickstart for evaluating Azure Stack and use the Azure Active Directory and Azure Subscription from above when deploying Azure Stack.

Once your deployment is complete you can verify that you are properly connected by logging into the Azure Stack Administration Portal with your Azure AD credential from above. From here, I verified by looking at the top right once I opened the Administration portal that the directory information was correct after the install.

After I deployed the Azure Stack Development Kit (ASDK), I connected the Azure Subscription created in the tenant (created in CSP) to Azure Stack to pull down Marketplace items by running the following PowerShell.

Add-AzureRmAccount -EnvironmentName "AzureCloud"

Register-AzureRmResourceProvider -ProviderNamespace Microsoft.AzureStack

Import-Module C:\AzureStack-Tools-master\Registration\RegisterWithAzure.psm1

$AzureContext = Get-AzureRmContext

$CloudAdminCred = Get-Credential -UserName AZURESTACK\CloudAdmin -Message "Enter the cloud domain credentials to access the privileged endpoint"

Set-AzsRegistration `
   -CloudAdminCredential $CloudAdminCred `
   -PrivilegedEndpoint AzS-ERCS01 `
   -BillingModel Development

 

Once the script completes, sign in to your Azure subscription from the Azure Portal and you should see an Azure Stack resource of type "Microsoft.AzureStack/registrations" in the "All resources" view. Also, check out the Activity log and you will find some interesting events initiated by the Azure Bridge.
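You can also confirm this from PowerShell instead of the portal (a quick sketch, run in the same AzureRM session the registration script signed in with):

# List the Azure Stack registration resource(s) created in the Azure subscription
Get-AzureRmResource |
    Where-Object { $_.ResourceType -eq 'Microsoft.AzureStack/registrations' } |
    Select-Object Name, ResourceGroupName, Location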

From Azure Stack Administration Portal, I was able to pull down images and extensions from the Marketplace.

Now that I have the Azure Stack environment properly configured to my Tenant and initial default subscription, it is time to create some offerings. As a CSP, I want to make sure these offerings are linked to their own subscription in Azure Stack to easily track billing and consumption (quotas). In this scenario, my developers for the eStore product need to work in both Azure Stack and Azure.

Back on the Microsoft Partner Portal, I created my users in my customer's tenant and assigned them an Azure Active Directory Basic license.

I then logged into my Azure portal, went to Azure Active Directory and created a group and assigned users (more on this later).

Now, let's log into Azure Stack Administration Portal and create some offerings, plans and a subscription for the eStore development team.

  1. Create a new plan called "Standard_IaaS" and select Microsoft.Compute, Microsoft.Network and Microsoft.Storage.
  2. I want to make sure I set some boundaries around how much capacity my development team can use, so I set quotas for Storage, Network and Compute (example below). For each resource type I set a quota.
  3. Once the quotas are set up, I created the plan "Standard_IaaS" and selected my three services and quotas.
  4. Next, I created a second plan called "Standard_IaaS_x2_Addon" for my development team to add additional capacity (if needed). In this case, I reused the same quotas I created above.

Now that I have two plans for my development team to use, let's create an offer for the development team so they can use these plans in their own subscription. Under Offers, I created a new one called Offer_IaaS and I also made sure to select the standard-iaas base plan. Although my screenshot below does not show it, I did select my standard-iaas-x2-addon plan for the Add-on plans.

Once created, you may notice we do not have a User Subscription assigned to this Offer. So let’s create that next.

Under User Subscription in the Azure Stack Administration Portal, let’s create a new User Subscription and call it eStore Development. I will also configure the User (which will be the owner), the directory tenant and select the Offer I created previously.

 

 

I’m not sure if you noticed (how could you miss it), but when we were in the Offer, there was a warning about it being private. Let's go back and switch that to Public so that our eStore developers can use it.

One other thing, we need to configure access to our Azure Stack for our eStore developers by going to the Default Provider Subscription and configuring the correct level of access. In this example, I'm going to grant them Contributor across the board.

Now if I log into the Azure Stack Portal with one of my eStore developer accounts I do indeed have a subscription available to me, but it initially does not have the Add-On plan.

 

But I can easily add the Standard_IaaS_x2_Addon to my subscription from the “Add plan” button.

Before I start deploying VMs, I want to make sure the resource providers are registered for my subscription, so head over to Resource providers and make sure they are registered.
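If you prefer to handle this from PowerShell instead of the portal, the registration can be scripted once your session is pointed at the Azure Stack user subscription (a minimal sketch; it assumes you have already connected to the Azure Stack user environment and selected the eStore Development subscription):

# Register the core IaaS resource providers in the current subscription
'Microsoft.Compute', 'Microsoft.Network', 'Microsoft.Storage' |
    ForEach-Object { Register-AzureRmResourceProvider -ProviderNamespace $_ }

# Confirm the registration state
Get-AzureRmResourceProvider -ListAvailable |
    Where-Object { $_.ProviderNamespace -match 'Compute|Network|Storage' } |
    Select-Object ProviderNamespace, RegistrationState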

Now let's deploy our first VM in our new Azure Stack User Subscription, which is associated with the tenant we created in CSP.

And there we have it!

Hopefully this gets you thinking about the possibilities around using Azure, Azure Stack and CSP. One of the next scenarios I plan to test is using the new Remote Desktop Modern Infrastructure (RDMI) with Azure Stack to keep critical workloads on-premises, but move some of the overhead and access to Azure.

Applications and Infrastructure Technical Community

 

Support-Release: (CONNECTORS): Release of Generic Connectors v1.1.830.0


All,

We are happy to announce the release of the MIM 2016 SP1 generic connectors version 1.1.830.

 1.1.830.0
As a note, the download is still replicating through the distribution network and pages may be cached; a hard refresh (Ctrl+F5) may be needed.

Fixed issues:

  • Resolved a ConnectorsLog error in System.Diagnostics.EventLogInternal.InternalWriteEvent (Message: A device attached to the system is not functioning)
  • In this release of the connectors you will need to update the binding redirect from 3.3.0.0-4.1.3.0 to 4.1.4.0 in miiserver.exe.config (see the sketch after this list)
  • Generic Web Services:
    • Resolved an issue where a valid JSON response could not be saved in the configuration tool
  • Generic SQL:
    • Export previously generated only an update query for delete operations; a delete query is now generated
    • Fixed the SQL query that retrieves objects for Delta Import when 'Delta Strategy' is 'Change Tracking'. Known limitation of this implementation: Delta Import in 'Change Tracking' mode does not track changes in multi-valued attributes
    • Added the ability to generate a delete query for the case where the last value of a multi-valued attribute must be deleted and the row contains no other data besides the value being deleted
    • Fixed System.ArgumentException handling when OUTPUT parameters are implemented by a stored procedure
    • Fixed an incorrect query when exporting into a field of type varbinary(max)
    • Fixed an issue where the parameterList variable was initialized twice (in the ExportAttributes and GetQueryForMultiValue functions)
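The binding redirect change noted above can also be scripted. The sketch below is hypothetical: it assumes the default MIM Synchronization Service install path and simply rewrites any bindingRedirect whose oldVersion range currently ends at 4.1.3.0; back up the file first and adjust the path for your environment.

# Hypothetical sketch: update the assembly binding redirect in miiserver.exe.config
# so the redirected range covers 4.1.4.0 (the path assumes a default MIM install).
$config = 'C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service\Bin\miiserver.exe.config'
Copy-Item $config "$config.bak"            # keep a backup before editing

[xml]$xml = Get-Content -Path $config
foreach ($redirect in $xml.GetElementsByTagName('bindingRedirect')) {
    if ($redirect.oldVersion -eq '3.3.0.0-4.1.3.0') {
        $redirect.oldVersion = '3.3.0.0-4.1.4.0'
        $redirect.newVersion = '4.1.4.0'
    }
}
$xml.Save($config)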

creating a file windows 10 can’t delete


Microsoft has a LONG history when it comes to Windows.  I am getting familiar with GitHub and will publish my nifty little utility that writes files that Windows 7, 8 and 10 can't delete.  The files can have an extension or no extension.  Because the files use reserved names, they can't be renamed, accessed, or deleted using standard tools like the command prompt, PowerShell, or Windows Explorer.

A number of reserved names are shown in file listings from the command prompt and Windows Explorer - Windows 10 can't delete these files.
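To illustrate the underlying trick (this is not the author's utility, just the commonly documented device-namespace approach): reserved device names such as CON, PRN, AUX and NUL can be written by addressing the path through the \\.\ prefix, which skips the normal Win32 name checks, and the same prefix is needed to remove the file again.

# Create C:\Temp\con.txt - "con" is a reserved device name, so Explorer, del,
# and Remove-Item all refuse to touch it through a normal path.
New-Item -ItemType Directory -Path C:\Temp -Force | Out-Null
cmd /c 'echo locked > \\.\C:\Temp\con.txt'

# Removing it later requires the same device-namespace syntax:
# cmd /c 'del \\.\C:\Temp\con.txt'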

March 2018 Microsoft Security Update Release


Microsoft Security Update Release

 

This notification is intended to provide an overview of the new security updates released on Wednesday, March 14, 2018 (Korea time). Microsoft releases security updates every month to address security vulnerabilities in Microsoft products.

 

Security Update Overview

 

On Wednesday, March 14, 2018 (Korea time), Microsoft released new security updates affecting the following Microsoft products.

 

Product family | Maximum severity | Maximum impact | Associated KB articles and/or support pages

Windows 10, Windows Server 2016 (including Microsoft Edge) | Critical (some CVEs for Edge) | Remote code execution | Windows 10 1709: 4088776; Windows 10 1703: 4088782; Windows 10 1607: 4088787; Windows 10 RTM: 4088786; Windows Server 2016: 4088787.

Windows 8.1, Windows Server 2012 R2 | Important | Remote code execution | Windows 8.1 and Windows Server 2012 R2 monthly rollup: 4088876; Windows 8.1 and Windows Server 2012 R2 security-only: 4088879.

Windows Server 2012 | Important | Remote code execution | Windows Server 2012 monthly rollup: 4088877; Windows Server 2012 security-only: 4088880.

Windows RT 8.1 | Important | Remote code execution | Windows RT 8.1: 4088876. Note: updates for Windows RT 8.1 are available only through Windows Update.

Windows 7, Windows Server 2008 R2 | Important | Remote code execution | Windows 7 and Windows Server 2008 R2 monthly rollup: 4088875; Windows 7 and Windows Server 2008 R2 security-only: 4088878.

Windows Server 2008 | Important | Remote code execution | Updates for Windows Server 2008 are not offered as a cumulative update or rollup. The following articles apply to versions of Windows Server 2008: 4056564, 4073011, 4087398, 4088827, 4088933, 4089175, 4089229, 4089344, 4089453.

Internet Explorer | Critical | Remote code execution | Internet Explorer 9 IE cumulative: 4089187; Internet Explorer 10 monthly rollup: 4088877; Internet Explorer 10 IE cumulative: 4089187; Internet Explorer 11 monthly rollups: 4088875 and 4088876; Internet Explorer 11 IE cumulative: 4089187; Internet Explorer 11 security updates: 4088776, 4088779, 4088782, 4088786, 4088787.

Microsoft Office-related software | Important | Remote code execution | The KB article numbers associated with Microsoft Office in each monthly security release depend on the CVE numbers and affected components. This month there are more than 20 KB articles related to Office updates, too many to list here; see the Security Update Guide for details.

SharePoint Enterprise Server, Project Server | Important | Elevation of privilege | Microsoft SharePoint Server: 4011688, 4011705, 4018293, 4018298, 4018304. Microsoft Project Server 2013: 4018305.

Microsoft Exchange Server | Important | Elevation of privilege | Microsoft Exchange Server: 4073392 and 4073537.

.NET Core, ASP.NET Core | Important | Elevation of privilege | .NET Core: https://github.com/dotnet/core/. ASP.NET Core: https://github.com/aspnet/Announcements/issues/.

ChakraCore | Critical | Remote code execution | ChakraCore is the core part of Chakra, the high-performance JavaScript engine that powers Microsoft Edge and Windows applications written in HTML/CSS/JS. For more information, see https://github.com/Microsoft/ChakraCore/wiki.

Adobe Flash Player | Critical | Remote code execution | Adobe Flash Player KB article: 4088785. Adobe Flash Player advisory: ADV180006.

 

Security Vulnerability Overview

 

The summary below breaks down the number of vulnerabilities addressed in this release by product/component and by impact.

 

Vulnerability details (1) | RCE | EOP | ID | SFB | DOS | SPF | Publicly disclosed | Known exploits | Max CVSS
Windows 10 1709 | 2 | 7 | 15 | 2 | 1 | 0 | 0 | 0 | 7.4
Windows 10 1703 | 2 | 7 | 15 | 2 | 1 | 0 | 0 | 0 | 7.4
Windows 10 1607 / Server 2016 | 2 | 8 | 15 | 2 | 1 | 0 | 0 | 0 | 7.4
Windows 10 RTM | 2 | 5 | 14 | 2 | 1 | 0 | 0 | 0 | 7.4
Windows 8.1 / Server 2012 R2 | 2 | 4 | 14 | 0 | 1 | 0 | 0 | 0 | 7.4
Windows Server 2012 | 2 | 4 | 14 | 0 | 1 | 0 | 0 | 0 | 7.4
Windows 7 / Server 2008 R2 | 2 | 5 | 14 | 0 | 1 | 0 | 0 | 0 | 7.4
Windows Server 2008 | 2 | 4 | 14 | 0 | 1 | 0 | 0 | 0 | 7.4
Internet Explorer | 2 | 1 | 4 | 0 | 0 | 0 | 0 | 0 | 7.5
Microsoft Edge | 11 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 4.3
Office | 2 | 9 | 1 | 1 | 0 | 0 | 0 | 0 | N/A (2)
SharePoint Enterprise Server / Project Server | 1 | 13 | 1 | 0 | 0 | 0 | 0 | 0 | N/A (2)
Exchange Server | 0 | 1 | 2 | 0 | 0 | 0 | 1 | 0 | N/A (2)
.NET Core / ASP.NET Core | 0 | 1 | 0 | 0 | 2 | 0 | 1 | 0 | N/A (2)

RCE = remote code execution | EOP = elevation of privilege | ID = information disclosure | SFB = security feature bypass | DOS = denial of service | SPF = spoofing

 

(1) Vulnerabilities that span multiple components may appear more than once in the table.

(2) At the time of release, CVSS scores were available only for the Windows, Internet Explorer, and Microsoft Edge CVEs.

 

Security Update Guide

 

The Security Update Guide is Microsoft's recommended resource for security update information. You can customize your views, create spreadsheets of affected software, and download data through a RESTful API. Note that the Security Update Guide is now offered officially in place of the legacy security bulletin web pages.

 

Security Update Guide portal:  https://aka.ms/securityupdateguide

 

Security Update Guide FAQ web page: https://technet.microsoft.com/ko-kr/security/mt791750

 

Security Update API Tutorial Page

 

A Security Update API demo video series has been posted on the Microsoft Support YouTube channel. The series shows how to access the API and how to use it to retrieve security update data. We hope you find it useful.

 

Security Update API tutorial web page: https://sugapitutorial.azurewebsites.net/.
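For readers who want to script against the RESTful API mentioned above, Microsoft also publishes the MsrcSecurityUpdates PowerShell module that wraps it. The following is a minimal sketch under that assumption (cmdlet and property names as published on the PowerShell Gallery; older module versions additionally require an API key set via Set-MSRCApiKey):

# Pull the March 2018 release (CVRF document) through the Security Update Guide API
Install-Module -Name MsrcSecurityUpdates -Scope CurrentUser   # one-time setup
Import-Module MsrcSecurityUpdates

# Older module versions need a key from the MSRC developer portal:
# Set-MSRCApiKey -ApiKey '<your API key>'

$cvrf = Get-MsrcCvrfDocument -ID '2018-Mar'

# List the first few CVEs and their titles from this month's release
$cvrf.Vulnerability |
    Select-Object -First 5 -Property CVE, @{ Name = 'Title'; Expression = { $_.Title.Value } }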

 

Vulnerability Details

 

Below is summary information for some of the security vulnerabilities included in this release. These particular vulnerabilities were selected from the larger set in the release for one or more of the following reasons: 1) Microsoft has received questions about the vulnerability; 2) the vulnerability may have received attention in the trade press; or 3) the vulnerability is potentially higher impact than others in the release. Because Microsoft publishes summaries for only some of the vulnerabilities in a release, review the Security Update Guide for any information not covered in these summaries.

 

CVE-2018-0886 (in English) CredSSP Remote Code Execution Vulnerability
Summary: A remote code execution vulnerability exists in the Credential Security Support Provider (CredSSP) protocol. An attacker who successfully exploited this vulnerability could relay user credentials and use them to execute code on the target system.

CredSSP is an authentication provider that processes authentication requests for other applications; any application that uses CredSSP for authentication may be vulnerable to this type of attack.

The security update addresses the vulnerability by correcting how the Credential Security Support Provider (CredSSP) protocol validates requests during the authentication process.

To be fully protected against this vulnerability, users must enable a Group Policy setting on their systems and update their Remote Desktop clients. The Group Policy setting is disabled by default to prevent connectivity problems; the Security Update Guide recommends enabling the protection by following the documented instructions.

Attack vector: As an example of how an attacker could exploit this vulnerability against the Remote Desktop Protocol: the attacker would run a specially crafted application and perform a man-in-the-middle attack against a Remote Desktop Protocol session. The attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.
Mitigating factors: Microsoft has not identified any mitigating factors for this vulnerability.
Workarounds: Microsoft has not identified any workarounds for this vulnerability.
Affected software: All supported versions of Windows
Impact: Remote code execution
Severity: Important
Publicly disclosed: No
Known exploits: No
Exploitability assessment, latest software release: 2 - Exploitation Less Likely
Exploitability assessment, older software release: 2 - Exploitation Less Likely
More information: https://portal.msrc.microsoft.com/ko-kr/security-guidance/advisory/CVE-2018-0886

 

 

CVE-2018-0872 (in English) Chakra Scripting Engine Memory Corruption Vulnerability
Summary: A remote code execution vulnerability exists in the way the Chakra scripting engine handles objects in memory in Microsoft Edge. The vulnerability could corrupt memory in such a way that an attacker could execute arbitrary code in the context of the current user.

An attacker who successfully exploited the vulnerability could gain the same user rights as the current user. If the current user is logged on with administrative user rights, an attacker who successfully exploited the vulnerability could take control of the affected system. The attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.

The security update addresses the vulnerability by modifying how the Chakra scripting engine handles objects in memory.

Attack vector: In a web-based attack scenario, an attacker could host a specially crafted website designed to exploit the vulnerability through Microsoft Edge and then convince a user to view the website. The attacker could also take advantage of compromised websites and websites that accept or host user-provided content or advertisements; such websites could contain specially crafted content designed to exploit the vulnerability.
Mitigating factors: An attacker has no way to force users to visit a website. Instead, the attacker would typically have to convince users to click a link in an email or instant message.

Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights.

Workarounds: Microsoft has not identified any workarounds for this vulnerability.
Affected software: Chakra Core, and Edge on Windows 10 and Windows Server 2016
Impact: Remote code execution
Severity: Critical
Publicly disclosed: No
Known exploits: No
Exploitability assessment, latest software release: 1 - Exploitation More Likely
Exploitability assessment, older software release: 4 - Not affected
More information: https://portal.msrc.microsoft.com/ko-kr/security-guidance/advisory/CVE-2018-0872

 

 

CVE-2018-0922 (in English) Microsoft Office Memory Corruption Vulnerability
Summary: A remote code execution vulnerability exists in Microsoft Office software when the software fails to properly handle objects in memory. An attacker who successfully exploited the vulnerability could run arbitrary code in the context of the current user. If the current user is logged on with administrative user rights, the attacker could take control of the affected system. The attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights.

Exploitation of the vulnerability requires that a user open a specially crafted file with an affected version of Microsoft Office software.

The security update addresses the vulnerability by correcting how Office handles objects in memory.

Attack vector: In an email attack scenario, an attacker could exploit the vulnerability by sending the specially crafted file to the user and convincing the user to open it.

In a web-based attack scenario, an attacker could host a website containing a specially crafted file designed to exploit the vulnerability (or leverage a compromised website that accepts or hosts user-provided content).

The Preview Pane is not an attack vector for this vulnerability.

Mitigating factors: An attacker has no way to force users to visit a website. Instead, the attacker would typically have to convince users to click a link in an email or instant message and then convince them to open the specially crafted file.

Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights.

Workarounds: Microsoft has not identified any workarounds for this vulnerability.
Affected software: Microsoft Office 2010, Office Compatibility Pack, Office Online Server 2016, Office Web Apps 2010, Office Web Apps 2013, Office Word Viewer, SharePoint Enterprise Server 2013, SharePoint Server 2010, Word 2007, Word 2010, Word 2013, Word 2013 RT.
Impact: Remote code execution
Severity: Important
Publicly disclosed: No
Known exploits: No
Exploitability assessment, latest software release: 4 - Not affected
Exploitability assessment, older software release: 2 - Exploitation Less Likely
More information: https://portal.msrc.microsoft.com/ko-kr/security-guidance/advisory/CVE-2018-0922

 

 

CVE-2018-0909 (in English) Microsoft SharePoint Elevation of Privilege Vulnerability
Summary: An elevation of privilege vulnerability exists when SharePoint Server does not properly sanitize a specially crafted web request to an affected Microsoft SharePoint Server. The security update addresses the vulnerability by ensuring that SharePoint Server properly sanitizes web requests.
Attack vector: An authenticated attacker could exploit the vulnerability by sending a specially crafted request to an affected SharePoint server.

An attacker who successfully exploited the vulnerability could perform cross-site scripting attacks on the affected system and run script in the security context of the current user.

These attacks could allow the attacker to read content the attacker is not authorized to read, use the victim's identity to take actions on the SharePoint site on the victim's behalf (such as changing permissions and deleting content), and inject malicious content into the victim's browser.

Mitigating factors: Microsoft has not identified any mitigating factors for this vulnerability.
Workarounds: Microsoft has not identified any workarounds for this vulnerability.
Affected software: Microsoft SharePoint Enterprise Server 2016 and Microsoft Project Server 2013
Impact: Elevation of privilege
Severity: Important
Publicly disclosed: No
Known exploits: No
Exploitability assessment, latest software release: 2 - Exploitation Less Likely
Exploitability assessment, older software release: 2 - Exploitation Less Likely
More information: https://portal.msrc.microsoft.com/ko-kr/security-guidance/advisory/CVE-2018-0909

 

 

CVE-2018-0940 (in English) Microsoft Exchange Elevation of Privilege Vulnerability
Summary: An elevation of privilege vulnerability exists when Microsoft Exchange Outlook Web App (OWA) fails to properly sanitize links presented to users. An attacker who successfully exploited this vulnerability could attempt to trick users into disclosing sensitive information by overriding the OWA interface with a fake sign-in page.

The security update addresses the vulnerability by correcting how Microsoft Exchange rewrites links provided within email bodies.

Attack vector: To exploit this vulnerability, an attacker could send a specially crafted email message containing a malicious link to a user. The user would have to click the malicious link to be exposed to the vulnerability.
Mitigating factors: Microsoft has not identified any mitigating factors for this vulnerability.
Workarounds: Microsoft has not identified any workarounds for this vulnerability.
Affected software: Microsoft Exchange Server 2010, Exchange Server 2013, Exchange Server 2016
Impact: Elevation of privilege
Severity: Important
Publicly disclosed:
Known exploits: No
Exploitability assessment, latest software release: 3 - Exploitation Unlikely
Exploitability assessment, older software release: 3 - Exploitation Unlikely
More information: https://portal.msrc.microsoft.com/ko-kr/security-guidance/advisory/CVE-2018-0940

 

About the Consistency of Information

 

Microsoft strives to provide accurate information in both static (this email) and dynamic (web-based) content. Microsoft's security content published on the web is frequently updated to reflect the latest information. If this results in an inconsistency between the information in this notification and the information in Microsoft's web-based security content, the web-based security content is authoritative.

 

Thank you.

 

 

Support Tip: Change in iOS passcode compliance may affect email access for some end users


You may have noticed that some of your users cannot access email after they have an iOS compliance policy assigned to them. One reason could be that they deferred setting a PIN or passcode after an iOS passcode policy was applied.

When users on an iOS device are targeted with a passcode compliance policy, they will be considered ‘not compliant’ until they set a PIN. Any company resources protected by Conditional Access policies requiring a compliant device will be blocked until the user makes their device compliant for the assigned policies. If they choose to defer setting a PIN, they will be prompted every 15 minutes to set it, until a PIN is set.

Note that devices that are locked after being marked not compliant will lose email access until they are unlocked and a PIN is entered. End users on these devices may experience a delay of a few minutes until their email is updated again.

We hope this helps! Let us know if you have any questions or feedback.


SAP on Azure Cloud Workshops (for Partners) [Updated 3/23]


To further drive our joint business, we will be holding cloud workshops for engineers at SAP on Azure partners on the dates below.

This one-day course, delivered by members of the Global Black Belt (GBB) Asia time zone SAP technology team, goes in depth from Azure infrastructure fundamentals through the knowledge required to implement SAP in the cloud, the constraints to consider and how to address them, and concrete SAP on Azure architectures, so that you acquire the skills needed to move SAP cloud migration projects forward.

Participants should have SAP architecture skills and are asked to attend all sessions on the day.
* All sessions are delivered in Japanese.

Workshops are planned at locations across Japan, so partners outside Tokyo are also encouraged to consider attending.

 

Dates: 2018/02/26 - 2018/06/21 (see the list below for venues and registration links)

Standard agenda

 

If you would like to participate, please register via the venue links below. Registration closes as soon as capacity is reached, so please apply early!

Date | Location | Venue (registration link)
4/11 (Wed) | Tokyo | Microsoft Shinagawa headquarters
4/16 (Mon) | Fukuoka | TKP Hakata Ekimae City Center
5/10 (Thu) | Tokyo | Microsoft Shinagawa headquarters
5/25 (Fri) | Nagoya | TKP Garden City PREMIUM Nagoya Shinkansen-guchi
6/6 (Wed) | Tokyo | Microsoft Shinagawa headquarters
6/21 (Thu) | Osaka | TKP Garden City PREMIUM Osaka Ekimae

 

 

Easy Configuration of the Azure Information Protection Scanner


The Scenario:

The EU General Data Protection Regulation (GDPR) is taking effect on May 25, 2018 and marks a significant change to the regulatory landscape of data privacy.  The aim of the GDPR is to protect all EU citizens from privacy and data breaches in an increasingly data-driven world.  Organizations in breach of GDPR can be fined up to 4% of annual global turnover or €20 Million (whichever is greater).  Needless to say, this has motivated organizations worldwide to better classify and protect sensitive personal data to protect against breach.  One of the ways to accomplish this is to protect everything sensitive using Azure Information Protection.

Azure Information Protection allows data workers to classify and optionally protect documents as they are created.  There are also options for automatically classifying/protecting emails as they are sent through your Exchange server or Exchange Online, and SharePoint Online can be protected using Microsoft Cloud App Security AIP integration.  These options go a long way to protect newly created data and data migrated to the cloud, but what about the terabytes of data sitting on File Shares and On-Premises SharePoint 2013/2016 servers? That is where the AIP Scanner comes in.

The Solution:

The Azure Information Protection Scanner is the solution for classifying and protecting documents stored on File Shares and On-Premises SharePoint servers. The overview below is from the official documentation at https://docs.microsoft.com/en-us/information-protection/deploy-use/deploy-aip-scanner.  This blog post is meant to assist customers with deploying the AIP Scanner, but if there is ever a conflict, the official documentation is authoritative.

Azure Information Protection scanner overview

The AIP Scanner runs as a service on Windows Server and lets you discover, classify, and protect files on the following data stores:

  • Local folders on the Windows Server computer that runs the scanner.
  • UNC paths for network shares that use the Common Internet File System (CIFS) protocol.
  • Sites and libraries for SharePoint Server 2016 and SharePoint Server 2013.

The scanner can inspect any files that Windows can index, by using iFilters that are installed on the computer. Then, to determine if the files need labeling, the scanner uses the Office 365 built-in data loss prevention (DLP) sensitivity information types and pattern detection, or Office 365 regex patterns. Because the scanner uses the Azure Information Protection client, it can classify and protect the same file types.

You can run the scanner in discovery mode only, where you use the reports to check what would happen if the files were labeled. Or, you can run the scanner to automatically apply the labels.

Note that the scanner does not discover and label in real time. It systematically crawls through files on data stores that you specify, and you can configure this cycle to run once, or repeatedly.

Prerequisites:

To install the AIP Scanner in a production environment, the following items are needed:

  • A Windows Server 2012 R2 or 2016 Server to run the service
    • Minimum 4 CPU and 4GB RAM physical or virtual
    • Internet connectivity necessary for Azure Information Protection
  • A SQL Server 2012+ local or remote instance (Any version from Express or better is supported)
    • Sysadmin role needed to install scanner service
  • Service account created in On Premises AD and synchronized with Azure AD (I will call this account AIPScanner in this document)
    • Service requires Log on locally right and Log on as a service right (the second will be given during scanner service install)
    • Service account requires Read permissions to each repository for discovery and Read/Write permissions for classification/protection
  • AzInfoProtectionScanner.exe available on the Microsoft Download Center (future versions will be included in the AIP client)
  • Labels configured for Automatic Classification/Protection
    • NOTE: This is an AIP Premium P2/EMS E5 feature 
    • https://docs.microsoft.com/en-us/information-protection/deploy-use/configure-policy-classification

Installation:

Here is where the Easy part from the title gets started.  Installation of the AIP Scanner service is incredibly simple and straight-forward.

  1. Log onto the server where you will install the AIP Scanner service using an account that is a local administrator of the server and has permission to write to the SQL Server master database.
  2. Right-click on the Windows button in the lower left-hand corner and click on Command Prompt (Admin)
  3. Type PowerShell and hit Enter
  4. At the PowerShell prompt, type the following command and press Enter:
    Install-AIPScanner
  5. When prompted, provide the credentials for the scanner service account (YourDomain\AIPScanner) and password
  6. When prompted for SqlServerInstance, enter the name of your SQL Server and press Enter (a one-line variant that supplies this up front is shown after this list)
    You should see a success message like the one below
  7. Right-click on the Windows button in the lower left-hand corner and click on Run
  8. In the Run dialog, type services.msc and click OK
  9. In the Services console, double-click on the Azure Information Protection Scanner service
  10. On the Log On tab of the Azure Information Protection Scanner Service Properties, verify that Log on as: is set to the YourDomain\AIPScanner service account
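If you would rather not be prompted for the SQL instance, the same cmdlet accepts it on the command line (a minimal sketch; 'SQL01\AIPSCANNER' is a placeholder instance name, and you are still prompted for the service account credentials):

# Install the AIP Scanner service against a specific SQL Server instance
Install-AIPScanner -SqlServerInstance 'SQL01\AIPSCANNER'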

See, told you it was easy to install.  Luckily, configuring the service is only slightly more challenging. 🙂

Scanner Configuration:

OK, this next part is not super simple but it isn't terrible either as long as you don't miss anything.  Luckily, you can follow my steps to make it as easy as possible.

Authentication Token:

  1. On the server where you installed the scanner, create a new text document on the desktop and name it Set-AIPAuthentication.txt
    • In this document, paste the line of PowerShell code below and save: Set-AIPAuthentication -webAppId <ID of the "Web app / API" application> -webAppKey <key value generated in the "Web app / API" application> -nativeAppId <ID of the "Native" application>
  2. Open Internet Explorer and browse to https://portal.azure.com
  3. At the Sign in to Microsoft Azure page, enter your tenant admin credentials
  4. In the Microsoft Azure portal, click on Azure Active Directory in the left-hand pane
  5. Under Manage, click on App registrations

  6. In the App registrations blade, click the + New application registration button
  7. In the Create blade, use the values in the table below to create the registration
    Name AIPOnBehalfOf
    Application type Web app / API
    Sign-on URL http://localhost

  8. Click the Create button to complete the app registration
  9. Select the AIPOnBehalfOf application
  10. In the AIPOnBehalfOf blade, hover the mouse over the Application ID and click on the Click to copy icon when it appears
  11. Minimize (DO NOT CLOSE) Internet Explorer and other windows to show the desktop
  12. On the desktop, return to Set-AIPAuthentication.txt and replace <ID of the "Web app / API" application> with the copied Application ID value. WARNING: Ensure there is only a single space after the Application ID before -webAppKey
  13. Return to the browser and click on the Settings button
  14. In the Settings blade, under API ACCESS, click on Keys

  15. In the Keys blade, add a new key by typing AIPClient in the Key description field and your choice of duration (1 year, 2 years, or never expires)
  16. Select Save and copy the Value that is displayed. WARNING: Do not dismiss this screen until you have saved the value as you cannot retrieve it later
  17. Go back to the txt document and replace <key value generated in the "Web app / API" application> with the copied key value. WARNING: Ensure there is only a single space after the Application Key before -nativeAppId
  18. Repeat steps 6-10 to create a Native Application using the values in the table below
    Name AIPClient
    Application type Native Application
    Sign-on URL http://localhost

  19. Replace <ID of the "Native" application > in the txt document with the copied Application ID value

  20. Return to the browser and in the AIPClient blade, click on Settings
  21. In the Settings blade, under API ACCESS, select Required permissions

  22. On the Required permissions blade, click Add, and then click Select an API

  23. In the search box, type AIPO and click on AIPOnBehalfOf, and then click the Select button
  24. On the Enable Access blade, check the box next to AIPOnBehalfOf, click the Select button
  25. Click Done

  26. Return to the PowerShell window and paste the completed command from Set-AIPAuthentication.txt and press Enter
  27. When prompted, enter the user AIPScanner@yourdomain.onmicrosoft.com and the password. NOTE: Replace yourdomain with your own tenant name (a filled-in example of the completed command appears after this list)

  28. You should see a prompt like the one below. Click Accept

  29. You will see the message below in the PowerShell window once complete
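For reference, the completed command you paste back into PowerShell ends up looking something like this (the GUIDs and key below are placeholders; use the values copied from your own app registrations):

# Placeholder values - substitute the Application IDs and key from your tenant
Set-AIPAuthentication -webAppId '11111111-2222-3333-4444-555555555555' `
                      -webAppKey 'abcDEFghiJKLmnoPQRstuVWXyz0123456789+/=' `
                      -nativeAppId '66666666-7777-8888-9999-000000000000'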

Configuring Repositories:

Now that the scanner is happy and fully authenticated, it is time to put it to work scanning repositories.  These can be on-premises SharePoint 2013 or 2016 document libraries or lists and any accessible CIFS based share.  Keep in mind that in order to do discovery, classification, and protection, the scanner service pulls the documents to the server, so having the scanner server located in the same LAN as your repositories is recommended. You can deploy as many servers as you like in your domain, so putting one at each major site is probably a good idea.

  1. To add a file share repository, open a PowerShell window and run the command below
    Add-AIPScannerRepository -Path \\fileserver\documents
  2. To add a SharePoint 2013/2016 document library run the command below
    Add-AIPScannerRepository -Path http://sharepoint/documents
  3. To verify the repositories that are configured, run the command below
    Get-AIPScannerRepository
  4. Run the command below to run an initial discovery cycle
    Set-AIPScannerConfiguration -Schedule OneTime 
    NOTE: Although the scanner will discover documents to protect, it will not protect them as the default configuration for the scanner is Discover only mode
  5. Start the AIP Scanner service using the command below
    Start-Service AIPScanner
  6. Right-click on the Windows Windows button in the lower left-hand corner and click on Event Viewer

  7. Expand Application and Services Logs and click on Azure Information Protection

  8. You will see an event like the one below when the scanner completes the cycle

    NOTE: You may also browse to %localappdata%\Microsoft\MSIP\Scanner\Reports and review the summary txt and detailed csv files available there
  9. At the PowerShell prompt type the command below to enforce protection and have the scanner run once
    Set-AIPScannerConfiguration -ScanMode Enforce -Schedule OneTime -Type Full
    NOTE: After testing, you would use the same command with the -Schedule Continuous command to have the AIP Scanner run continuously
    NOTE: The -Type Full switch forces the scanner to review every document. 
  10. Start the AIP Scanner service using the PowerShell command below
    Start-Service AIPScanner
  11. In the Event Log, you will now see an event that looks like the one below
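For convenience, the repository and scan-cycle commands from the steps above can be collected into one script (same placeholder paths as earlier; run the enforce pass only after you are happy with the discovery reports):

# Register the repositories to scan (placeholder paths from the steps above)
Add-AIPScannerRepository -Path '\\fileserver\documents'
Add-AIPScannerRepository -Path 'http://sharepoint/documents'
Get-AIPScannerRepository

# First pass: discovery only (the scanner's default scan mode)
Set-AIPScannerConfiguration -Schedule OneTime
Start-Service AIPScanner

# After reviewing the reports: enforce classification/protection on every document
Set-AIPScannerConfiguration -ScanMode Enforce -Schedule OneTime -Type Full
Start-Service AIPScanner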

And that's all there is to setting up the AIP Scanner! There are many more options to consider about how to classify files and what repositories you want to configure, but I would say that it is fairly simple to set up a basic scanner server that can be used to protect a large amount of data easily.  I highly recommend reading the official documentation on deploying the scanner as there are some less common caveats that I have left out and they cover performance tips and other nice additional information.

I hope this was helpful. Please let me know if I missed anything or if anything is not clear in the comments below.

Kevin

Top answerers on the TechNet forums in February


The engineering team that supports the Russian-language TechNet forums has finally revived the statistics-gathering utility. We now hope to post reports on the top answerers every month.

For February 2018, the ranking of top answerers looks like this:

1    Vector BCO 
2    Dmitriy Razbornov 
3    Ilya Tumanov 
4    M.V.V. _ 
5    Антонов Антон 
6    Mikhail Efimov 
7    Ivan.Basov 
8    Sergey Ya 
9    MSBuy.ru 
10    Kaplin Vladimir 
11    Alexander Surbashev 
12    Artem S. Smirnov 
13    Alexey Klimenko 
14    Denis Dyagilev 
15    Svolotch

 

Launch of the Speculative Execution Bounty Program


This article is a translation of the Microsoft Security Response Center blog post "Speculative Execution Bounty Launch" (published March 14, 2018, US time).


Today, Microsoft is announcing the launch of a limited-time bounty program for speculative execution side-channel vulnerabilities. This new class of vulnerabilities was disclosed in January 2018 and represented a major advance in research in this field. In response to this changed threat environment, we are launching a bounty program to encourage research into this new class of vulnerabilities and into the mitigations Microsoft has released to address this class of issues.

Overview:

Tier | Bounty range (USD)
Tier 1: New categories of speculative execution attacks | Up to $250,000
Tier 2: Azure speculative execution mitigation bypasses | Up to $200,000
Tier 3: Windows speculative execution mitigation bypasses | Up to $200,000
Tier 4: Instances of known speculative execution vulnerabilities (such as CVE-2017-5753) in Windows 10 or Microsoft Edge. The vulnerability must enable the disclosure of sensitive information across a trust boundary. | Up to $25,000

 

Speculative execution is a genuinely new class of vulnerability, and research into novel attack methods is already under way. This bounty program is intended to be one way to foster that research and the coordinated disclosure of vulnerabilities related to these issues. Tier 1 focuses on new categories of attacks involving speculative execution side channels. The Security Research & Defense team has published additional information (in English) on what is currently known within the industry. Tiers 2 and 3 cover bypasses of the mitigations added to Windows and Azure to defend against the attacks already identified. Tier 4 covers instances, which may exist, that can exploit CVE-2017-5753 or CVE-2017-5715.

Speculative execution side-channel vulnerabilities require an industry-wide response. For that reason, Microsoft will share research disclosed under this program under the principles of coordinated vulnerability disclosure, so that affected parties can collaborate on solutions to these vulnerabilities. Together with security researchers, we will continue to make our customers' environments even more secure.

 

Phillip Misner, Principal Security Group Manager, Microsoft Security Response Center

 

■ Notes on submitting reports

To participate in Microsoft's bounty programs, all vulnerability reports must be submitted directly to secure@microsoft.com in the United States, in accordance with the bounty program guidelines. If reporting in English is difficult, reports written in Japanese, or with Japanese alongside English, are also acceptable. This is important for fairness in selecting bounty award recipients. We look forward to your participation!

Notes on using Windows Defender Offline


Hello, this is Wakasa from the Security Product Support team at Microsoft Japan.

Today I would like to share a caution for when you use Windows Defender Offline booted from USB.

Windows Defender Offline is a feature that, in addition to ordinary malware detection, can detect and remediate threats such as rootkits that cannot be handled by running Windows Defender itself.

On Windows 10 this feature can be started from the Windows Defender user interface; on other operating systems you write the image to an optical disc or a USB memory stick and boot from it.

However, there is a known issue, so we apologize for the inconvenience and ask that you review the following notes before use.

 

==========================
- Notes
==========================

There is a known issue where, when booting from USB, a scan may fail to start correctly.

When this issue occurs, Windows Defender Offline cannot read the malware definition file on the USB drive it was booted from, and a screen like the one below is displayed.

To work around this, copy the latest definition file from the USB memory stick to the C: drive of the operating system you intend to scan beforehand.

The definition file is stored in the root folder of the USB memory stick on which Windows Defender Offline was deployed, with a file name of the form "mpam-*.exe".

By copying this executable to the root directory of the target machine's local disk, Windows Defender Offline becomes able to find the definition file correctly.
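For example, if the USB memory stick is mounted as drive E: and the target system drive is C:, the copy can be done as follows (the drive letters are placeholders for your environment):

# Copy the Windows Defender Offline definition package from the USB root
# to the root of the target machine's system drive (adjust drive letters).
Copy-Item -Path 'E:\mpam-*.exe' -Destination 'C:\'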

 

If you use this workaround, then whenever the definition file on the USB memory stick is updated, be sure to replace the executable copied to the local disk with the new one.

If you run a Windows Defender Offline scan without replacing it, only the old definition file previously copied to the machine will be used, and the latest malware may not be handled.

A fix for this behavior is planned, but the release date of the corrected module is currently undetermined.

We apologize for the inconvenience; please use the workaround above for the time being.

 

* For details on how to use Windows Defender Offline, see the following reference:

<Help protect my PC with Windows Defender Offline>
https://support.microsoft.com/ja-jp/help/17466/windows-defender-offline-help-protect-my-pc

 

Please note that the content of this information (including attachments and links) is current as of the date of writing and may change without notice.

Thank you for your understanding.

Microsoft Office 365 in Education. Organizing learning with Microsoft Sway. Examples


Article author: Vitaly Vedenev.

I continue to look, with concrete examples, at how to create an electronic training course with Sway [1] and how to organize learning with Sway [2].

What will you know and be able to do after reading this article?

  • How to export an electronic textbook created in Sway to PDF and Word files?
  • How to create an electronic textbook in Sway from PDF and Word files?
  • How to create an electronic textbook for universal use, including on mobile devices?

Active use of cloud services makes it possible to draw on a variety of sources when delivering your materials.

Sway is a modern service that uses Microsoft's block-based design, is optimized for touch input and control, and supports drag-and-drop of elements.

Sway automatically adapts learning materials to mobile devices and is a universal tool for organizing an educational environment.

Scenario 1. Saving a Sway to PDF and Word formats

Quite often you need to present a Sway textbook as a hard copy based on, for example, PDF or Word files, or to save the textbook's content as a file in OneDrive or SharePoint libraries or on local devices for offline learning [3].

To do this, use the option to export the textbook's content to the "Word" or "PDF" formats. The export procedure and how the content looks in the different formats are shown in the video "Sway electronic textbooks. Export to pdf and word" (https://youtu.be/Jq51uL2FDrQ).

The diagram shows the sequence for exporting the content of a specific Sway textbook to the Word and PDF formats. Video in the Word format is converted, as in Sway, into an embedded object. The procedure for creating a Sway from PDF is covered in more detail in Scenario 2.

The procedure for creating a Sway from the saved PDF and Word formats is covered in the video "Creating an electronic textbook in Sway from pdf or word" (https://youtu.be/qtMpzWSGKwA).

Scenario 2. Creating an electronic textbook from a PDF

Let's look at how to create an electronic textbook from a methodological guide in PDF format, adding multimedia: video, surveys, and Microsoft Forms quizzes.

On the Sway home page, use the "Start from a document" button at the top of the window and select the PDF file you want to publish in Sway from your local device (in this example, a methodological guide for instructors).

The guide is converted into a Sway automatically (for details, see the video "Creating an electronic textbook from a PDF file", https://youtu.be/tQmw8vWIFjg). After that, a few changes need to be made:

  • Edit the textbook's title.
  • Check the structure and placement of the images brought in from the PDF document.
  • Add "Media" at the right places in the text to illustrate the learning material more clearly:
    • To do this, select "Video" in the "Media" panel (the panel appears after clicking the "+" sign in edit mode).
    • In this example the video is taken from the public channel "Microsoft Office 365 в образовании" via "Search sources".
    • Then select the required videos, write the caption text, and set medium emphasis on the card.
    • Click "Play" and, if necessary, re-edit all the elements after reviewing.
  • At the end of each section of the textbook, add a Microsoft Forms survey (or quiz). At the end of the textbook, add a final assessment test.
    • To do this, click "Embedded object" in the "Media" panel and
    • add the Microsoft Forms embed code in the card.
    • The embed code must be copied directly from the Forms survey (quiz) page via "Share" - "Embed" [4].
  • The electronic textbook can also include links to modules with methodological materials stored in OneDrive for Business [1], which let you keep the text in the Sway textbook more compact and add more multimedia components.
  • Sway can also use the continuously updated Office 365 Stream video channels, both for the institution as a whole and for individual groups of learners. This makes it possible to change the Sway textbook only minimally when a training video is updated, or to keep adding material to the video channel without changing the textbook at all [1].

Scenario 3. Viewing a Sway on a mobile device and mobile learning

To work with textbooks on mobile devices, you need to "Share" a link to the specific textbook. When generating the link you can require a password for viewing or editing the Sway, so when following the link you may need to enter your institution's Office 365 password. Survey and quiz data will be recorded in Forms under your name.

While studying, you can watch the videos and use the other learning materials included in the electronic textbook.

Sources used:

  1. Microsoft Office 365 in Education. Educational program content and Microsoft Sway https://vedenev.livejournal.com/19936.html
  2. Microsoft Office 365 in Education. Organizing learning with Microsoft Sway https://blogs.technet.microsoft.com/tasush/2017/02/28/organizuem-obuchenie-s-pomoshhju-microsoft-sway/
  3. Microsoft Office 365 in Education. Offline learning in Office 365 http://blogs.technet.com/b/tasush/archive/2016/02/12/avtonomnoe-obuchenie-v-office-365.aspx
  4. Microsoft Office 365 in Education. Organizing knowledge assessment in Office 365 with Microsoft Forms https://blogs.technet.microsoft.com/tasush/2016/06/10/organizacija-kontrolja-znanij-v-office-365-s-pomoshhju-microsoft-forms/

Project management olympiad for students


ANO "Center for Assessment and Development of Project Management" invites third- and fourth-year students in project management programs at universities in Russia and the CIS countries to take part in the Student Olympiad in Project Management.

The Olympiad is an opportunity for students to test their knowledge and skills in project management. It is held in two stages:

  1. The qualifying stage is held remotely, online; participants take a computer-based test that checks their baseline knowledge of project management.
  2. The second, in-person stage will be held as a project management business game in Moscow on April 24, 2018.

To participate, you need to form a team of 3 people. Applications are accepted until March 25, 2018. For details, call 8-929-005-44-48 or email cert@isopm.ru.


Upcoming plans for Flash support in Internet Explorer and Microsoft Edge


Hello.
Today we'd like to cover a question we occasionally receive: the future plans for Flash in Internet Explorer and Microsoft Edge.

 

The roadmap for the end of Adobe Flash support is as laid out in the document below.
This article pulls out the key points from it.

The End of an Era – Next Steps for Adobe Flash
https://blogs.windows.com/msedgedev/2017/07/25/flash-on-windows-timeline/

 

■ Late 2017 through 2018
In Microsoft Edge on Windows 10 Creators Update (v1703) and later, you are asked to allow Flash content the first time you visit a website, and you are not asked again when you revisit a site you have already allowed.
Internet Explorer applies no special controls to running Flash.

 

■ Through the second half of 2018
Microsoft Edge will ask for permission to run Flash every time you visit a website that contains Flash.
In Internet Explorer, Flash will continue to be allowed to run, with no special controls.

 

■ Through the second half of 2019
Flash will be disabled by default in Microsoft Edge and Internet Explorer.
However, the configuration can be changed to allow Flash to run.
If you configure Flash to run, Microsoft Edge will, as in the "second half of 2018" phase, ask for permission every time you visit a website that contains Flash.

 

■ End of 2020
Flash will no longer run in Microsoft Edge or Internet Explorer on any supported version of Windows.
It will also no longer be possible to configure Flash so that it can run again.

 

That's all for today's article.
This information reflects current plans only, and it is quite possible the plans will change as things develop.
If you are planning website-side remediation, we therefore recommend building plenty of margin into your schedule.

 

Quick Tip – Download .NET Framework 4.5 Offline Installer


This post will attempt to resolve some download frustration if you are looking for an older version of the .NET Framework.  This is an issue with Exchange 2010, as the support position for that platform has not been updated to newer .NET Framework versions like Exchange 2013 and 2016 have, since Exchange 2010 is in extended support and is almost at the end of its support lifecycle.

As always check the supported version information in the Exchange Support Matrix Article before updating .NET on an Exchange server.

A separate instance where an older .NET Framework may be needed is for the Azure AD Module.  Currently Azure AD version 2 is being worked on though many customers still leverage the 1.* version of the module.  Depending on the environment you may run into the issue which is described here Azure AD Module – This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.

Below are the recent locations used to download the various versions of .NET.  Note that if a download is retired from the Download Centre, please do not ask me for a copy.  I am blocked from distributing software, and you will need to create a support case to explore the options available at that time.

.NET Framework Versions and Dependencies lists the details for .NET versions.

Personally I never browse the Internet or do any downloads on servers.  This is why my preference is to use the offline/standalone installer when possible.

.NET Framework 4.0 Standalone Installer

Download .NET Framework 4.0 Standalone Installer

.NET Framework 4.5 Standalone Installer

Download .NET Framework 4.5 Standalone Installer

.NET Framework 4.5.1 Standalone Installer

Download .NET Framework 4.5.1 Standalone Installer

.NET Framework 4.5.2 Standalone Installer


Cheers,

Rhoderick

Microsoft Premier Workshop: System Center Operations Manager: Configuration and Administration


Description
This three-day workshop introduces new administrators to the fundamental concepts of Microsoft System Center 2016 Operations Manager, through a combination of lectures and hands-on exercises.

Agenda
Module 1: Architecture Overview.
This module provides an overall introduction to Operations Manager architecture and its general features.

Module 2: Basic Concepts.
This module covers the basic concepts and terminology used within Operations Manager.

Module 3: Navigating the console.
This module covers the concepts and terminology used within the Operations Manager console to enable full access to console functionality.

Module 4: Management Pack Tuning.
This module will guide you through all the steps needed to tune management packs for a better monitoring experience.

Module 5: Maintenance Mode and Schedules.
This module will explain one of the new features of Operations Manager 2016 and how to integrate it for better operations on your hybrid cloud.

Module 6: Notifications.
This module will explain how you can configure and use notifications within Operations Manager.

Module 7: RBAC.
This module will illustrate how to implement and maintain role-based access control in an Operations Manager environment.

Module 8: Maintaining Operations Manager.
This module will explain the steps needed to maintain a healthy monitoring environment.

Module 9: Authoring.
This module will guide you through the powerful authoring features that enable the endless monitoring capabilities of Operations Manager.

Module 10: Visualization.
This module will guide you through all the ways you can visualize the data that Operations Manager collects.

Module 11: Linux Monitoring.
Linux and UNIX are an integral part of the monitoring capabilities of Operations Manager. This module will show you how to integrate these platforms seamlessly into your monitoring infrastructure.

Target audience
Participants should bring the following skills:
• Experience with standard computing systems such as file storage, networking, and Internet technologies
• General knowledge of core Microsoft technologies

Level 300
(Level scale: 100 = strategic / 200 = technical overview / 300 = deep technical knowledge / 400 = technical expert knowledge)

Registration
To register, please contact your Microsoft Technical Account Manager directly or visit us on the web at Microsoft Premier Education, where you will find a complete overview of all open workshops and can sign up right away.

SharePoint Conference North America has it all, and MORE!



Get more by registering NOW! http://tiny.cc/SPCNA_REG

 

There are 4 main reasons why people attend technical conferences and the SharePoint Conference North America (SPCNA) has all of them, and MORE!

  1. With the constantly changing world of technology, people need to know what's new before the competition does. SPCNA has the sessions and workshops to keep you ahead of the curve.
  2. Learn with the best of the best from Microsoft and top industry thought leaders from engineering and marketing. Attendees want to hear practical solutions from the people who actually designed, built, and integrated today's technologies. SPCNA has the best speakers.
  3. Network and connect with peers and business technology gurus, with an opportunity to share, collaborate, and understand how real-world solutions are created.
  4. Location! The host hotel is the world renowned MGM Grand. When you aren't engaged with sessions, receptions and parties, there is an endless line-up of shows, restaurants and activities for every taste.

BONUS: When you register for one of our workshop packages, take home an Xbox One X, an Xbox One S, or an Invoke by Harman Kardon, FREE.

It's pretty simple, SPCNA has it all. WE want you to BE THERE!

KVA Shadow: Mitigating Meltdown on Windows


On January 3rd, 2018, Microsoft released an advisory and security updates that relate to a new class of discovered hardware vulnerabilities, termed speculative execution side channels, that affect the design methodology and implementation decisions behind many modern microprocessors. This post dives into the technical details of Kernel Virtual Address (KVA) Shadow which is the Windows kernel mitigation for one specific speculative execution side channel: the rogue data cache load vulnerability (CVE-2017-5754, also known as “Meltdown” or “Variant 3”). KVA Shadow is one of the mitigations that is in scope for Microsoft's recently announced Speculative Execution Side Channel bounty program.

It’s important to note that there are several different types of issues that fall under the category of speculative execution side channels, and that different mitigations are required for each type of issue. Additional information about the mitigations that Microsoft has developed for other speculative execution side channel vulnerabilities (“Spectre”), as well as additional background information on this class of issue, can be found here.

Please note that the information in this post is current as of the date of this post.

Vulnerability description & background

The rogue data cache load hardware vulnerability relates to how certain processors handle permission checks for virtual memory. Processors commonly implement a mechanism to mark virtual memory pages as owned by the kernel (sometimes termed supervisor), or as owned by user mode. While executing in user mode, the processor prevents accesses to privileged kernel data structures by way of raising a fault (or exception) when an attempt is made to access a privileged, kernel-owned page. This protection of kernel-owned pages from direct user mode access is a key component of privilege separation between kernel and user mode code.

Certain processors capable of speculative out-of-order execution, including many currently in-market processors from Intel, and some ARM-based processors, are susceptible to a speculative side channel that is exposed when an access to a page incurs a permission fault. On these processors, an instruction that performs an access to memory that incurs a permission fault will not update the architectural state of the machine. However, these processors may, under certain circumstances, still permit a faulting internal memory load µop (micro-operation) to forward the result of the load to subsequent, dependent µops. These processors can be said to defer handling of permission faults to instruction retirement time.

Out of order processors are obligated to “roll back” the architecturally-visible effects of speculative execution down paths that are proven to have never been reachable during in-program-order execution, and as such, any µops that consume the result of a faulting load are ultimately cancelled and rolled back by the processor once the faulting load instruction retires. However, these dependent µops may still have issued subsequent cache loads based on the (faulting) privileged memory load, or otherwise may have left additional traces of their execution in the processor’s caches. This creates a speculative side channel: the remnants of cancelled, speculative µops that operated on the data returned by a load incurring a permission fault may be detectable through disturbances to the processor cache, and this may enable an attacker to infer the contents of privileged kernel memory that they would not otherwise have access to. In effect, this enables an unprivileged user mode process to disclose the contents of privileged kernel mode memory.
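To make the cache-disturbance side channel more concrete, the hypothetical user-mode C sketch below shows only the FLUSH+RELOAD measurement half of such an attack: a probe array is flushed, a secret-dependent line would be touched by the speculatively executed, dependent µops of a faulting kernel load (deliberately omitted here, along with the fault suppression), and timing the reloads reveals which line became cached. Names, the array granularity, and the timing threshold are illustrative assumptions, not a real exploit.

/* Hypothetical sketch: the cache-timing half of a rogue-data-cache-load probe.
 * Build on x86-64 with, e.g., gcc -O2 probe.c. The speculative, faulting read
 * of a kernel byte 'secret' that would touch probe[secret * 4096] is NOT
 * shown; only the flush/measure covert channel is illustrated. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <x86intrin.h>

#define LINES 256
static uint8_t probe[LINES * 4096];    /* one page per possible byte value */

static uint64_t time_access(volatile uint8_t *p)
{
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*p;                           /* load whose latency we measure   */
    return __rdtscp(&aux) - start;
}

int main(void)
{
    memset(probe, 1, sizeof(probe));

    /* 1. Flush every probe line out of the cache. */
    for (int i = 0; i < LINES; i++)
        _mm_clflush(&probe[i * 4096]);

    /* 2. In a real attack, a speculatively executed, faulting load of a
     *    privileged kernel byte would touch probe[secret * 4096] here.   */

    /* 3. Reload each line; one that comes back fast was cached, and its
     *    index would reveal the secret byte value.                       */
    for (int i = 0; i < LINES; i++) {
        uint64_t cycles = time_access(&probe[i * 4096]);
        if (cycles < 80)                /* threshold is machine-dependent  */
            printf("line %3d appears cached (%llu cycles)\n",
                   i, (unsigned long long)cycles);
    }
    return 0;
}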

Operating system implications

Most operating systems, including Windows, rely on per-page user/kernel ownership permissions as a cornerstone of enforcing privilege separation between kernel mode and user mode. A speculative side channel that enables unprivileged user mode code to infer the contents of privileged kernel memory is problematic given that sensitive information may exist in the kernel’s address space. Mitigating this vulnerability on affected, in-market hardware is especially challenging, as user/kernel ownership page permissions must be assumed to no longer prevent the disclosure (i.e., reading) of kernel memory contents from user mode. Thus, on vulnerable processors, the rogue data cache load vulnerability impacts the primary tool that modern operating system kernels use to protect themselves from privileged kernel memory disclosure by untrusted user mode applications.

In order to protect kernel memory contents from disclosure on affected processors, it is thus necessary to go back to the drawing board with how the kernel isolates its memory contents from user mode. With the user/kernel ownership permission no longer effectively safeguarding against memory reads, the only other broadly-available mechanism to prevent disclosure of privileged kernel memory contents is to entirely remove all privileged kernel memory from the processor’s virtual address space while executing user mode code.

This, however, is problematic, in that applications frequently make system service calls to request that the kernel perform operations on their behalf (such as opening or reading a file on disk). These system service calls, as well as other critical kernel functions such as interrupt processing, can only be performed if their requisite, privileged code and data are mapped in to the processor’s address space. This presents a conundrum: in order to meet the security requirements of kernel privilege separation from user mode, no privileged kernel memory may be mapped into the processor’s address space, and yet in order to reasonably handle any system service call requests from user mode applications to the kernel, this same privileged kernel memory must be quickly accessible for the kernel itself to function.

The solution to this quandary is to, on transitions between kernel mode and user mode, also switch the processor’s address space between a kernel address space (which maps the entire user and kernel address space), and a shadow user address space (which maps the entire user memory contents of a process, but only a minimal subset of kernel mode transition code and data pages needed to switch into and out of the kernel address space). The select set of privileged kernel code and data transition pages handling the details of these address space switches, which are “shadowed” into the user address space are “safe” in that they do not contain any privileged data that would be harmful to the system if disclosed to an untrusted user mode application. In the Windows kernel, the usage of this disjoint set of shadow address spaces for user and kernel modes is called “kernel virtual address shadowing”, or KVA shadow, for short.

In order to support this concept, each process may now have up to two address spaces: the kernel address space and the user address space. As there is no virtual memory mapping for other, potentially sensitive privileged kernel data when untrusted user mode code executes, the rogue data cache load speculative side channel is completely mitigated. This approach is not, however, without substantial complexity and performance implications, as will later be discussed.

On a historical note, some operating systems previously have implemented similar mechanisms for a variety of different and unrelated reasons: For example, in 2003 (prior to the common introduction of 64-bit processors in most broadly-available consumer hardware), with the intention of addressing larger amounts of virtual memory on 32-bit systems, optional support was added to the 32-bit x86 Linux kernel in order to provide a 4GB virtual address space to user mode, and a separate 4GB address space to the kernel, requiring address space switches on each user/kernel transition. More recently, a similar approach, termed KAISER, has been advocated to mitigate information leakage about the kernel virtual address space layout due to processor side channels. This is distinct from the rogue data cache load speculative side channel issue, in that no kernel memory contents, as opposed to address space layout information, were at the time considered to be at risk prior to the discovery of speculative side channels.

KVA shadow implementation in the Windows kernel

While the design requirements of KVA shadow may seem relatively innocuous (privileged kernel-mode memory must not be mapped into the address space when untrusted user mode code runs), the implications of these requirements are far-reaching throughout Windows kernel architecture. This touches a substantial number of core facilities for the kernel, such as memory management, trap and exception dispatching, and more. The situation is further complicated by a requirement that the same kernel code and binaries must be able to run with and without KVA shadow enabled. Performance of the system in both configurations must be maximized, while simultaneously attempting to keep the scope of the changes required for KVA shadow as contained as possible. This maximizes maintainability of code in both KVA shadow and non-KVA-shadow configurations.

This section focuses primarily on the implications of KVA shadow for the 64-bit x86 (x64) Windows kernel. Most considerations for KVA shadow on x64 also apply to 32-bit x86 kernels, though there are some divergences between the two architectures. This is due to ISA differences between 64-bit and 32-bit modes, particularly with trap and exception handling.

Please note that the implementation details described in this section are subject to change without notice in the future. Drivers and applications must not take dependencies on any of the internal behaviors described below without first checking for updated documentation.

The best way to understand the complexities involved with KVA shadow is to start with the underlying low-level interface in the kernel that handles the transitions between user mode and kernel mode. This interface, called the trap handling code, is responsible for fielding traps (or exceptions) that may occur from either kernel mode or user mode. It is also responsible for dispatching system service calls and hardware interrupts. There are several events that the trap handling code must handle, but the most relevant for KVA shadow are those called “kernel entry” and “kernel exit” events. These events, respectively, involve transitions from user mode into kernel mode, and from kernel mode into user mode.

Trap handling and system service call dispatching overview and retrospective

As a quick recap of how the Windows kernel dispatches traps and exceptions on x64 processors, traditionally, the kernel programs the current thread’s kernel stack pointer into the current processor’s TSS (task state segment), specifically into the KTSS64.Rsp0 field, which informs the processor which stack pointer (RSP) value to load up on a ring transition to ring 0 (kernel mode) code. This field is traditionally updated by the kernel on context switch, and several other related internal events; when a switch to a different thread occurs, the processor KTSS64.Rsp0 field is updated to point to the base of the new thread’s kernel stack, such that any kernel entry event that occurs while that thread is running enters the kernel already on that thread’s stack. The exception to this rule is that of system service calls, which typically enter the kernel with a “syscall” instruction; this instruction does not switch the stack pointer and it is the responsibility of the operating system trap handling code to manually load up an appropriate kernel stack pointer.

On typical kernel entry, the hardware has already pushed what is termed a “machine frame” (internally, MACHINE_FRAME) on the kernel stack; this is the processor-defined data structure that the IRETQ instruction consumes and removes from the stack to effect an interrupt-return, and includes details such as the return address, code segment, stack pointer, stack segment, and processor flags on the calling application. The trap handling code in the Windows kernel builds a structure called a trap frame (internally, KTRAP_FRAME) that begins with the hardware-pushed MACHINE_FRAME, and then contains a variety of software-pushed fields that describe the volatile register state of the context that was interrupted. System calls, as noted above, are an exception to this rule, and must manually build the entire KTRAP_FRAME, including the MACHINE_FRAME, after effecting a stack switch to an appropriate kernel stack for the current thread.
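For illustration only, the hypothetical C declarations below sketch the shape of the hardware-pushed machine frame and the beginning of a software-built trap frame as described above. The type names, the field ordering beyond the architecturally defined machine frame, and the subset of saved registers are illustrative assumptions and do not reflect the exact internal Windows structures.

#include <stdint.h>

/* Sketch of the x64 hardware-pushed interrupt frame ("machine frame").
 * The processor pushes these five quadwords on a ring 3 -> ring 0
 * transition; IRETQ consumes them again on the way back out. */
typedef struct _MACHINE_FRAME_SKETCH {
    uint64_t Rip;        /* return address in the interrupted code  */
    uint64_t SegCs;      /* code segment selector of the caller     */
    uint64_t EFlags;     /* RFLAGS at the time of the trap          */
    uint64_t Rsp;        /* stack pointer of the interrupted code   */
    uint64_t SegSs;      /* stack segment selector of the caller    */
} MACHINE_FRAME_SKETCH;

/* Sketch of a software-built trap frame: it begins with the machine
 * frame and is followed by the volatile register state that the trap
 * handling code saves itself (illustrative subset only). */
typedef struct _TRAP_FRAME_SKETCH {
    MACHINE_FRAME_SKETCH MachineFrame;
    uint64_t Rax, Rcx, Rdx, R8, R9, R10, R11;   /* volatile GPRs     */
    /* ...additional saved state elided... */
} TRAP_FRAME_SKETCH;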

KVA shadow trap and system service call dispatching design considerations

With a basic understanding of how traps are handled without KVA shadow, let’s dive into the details of the KVA shadow-specific considerations of trap handling in the kernel.

When designing KVA shadow, several design considerations applied for trap handling while KVA shadow was active: namely, that the security requirements were met, that performance impact on the system was minimized, and that changes to the trap handling code were kept as compartmentalized as possible in order to simplify code and improve maintainability. For example, it is desirable to share as much trap handling code between the KVA shadow and non-KVA shadow configurations as practical, so that it is easier to make changes to the kernel’s trap handling facilities in the future.

When KVA shadowing is active, user mode code typically runs with the user mode address space selected. It is the responsibility of the trap handling code to switch to the kernel address space on kernel entry, and to switch back to the user address space on kernel exit. However, additional details apply: it is not sufficient to simply switch address spaces, because the only transition kernel pages that can be permitted to exist (or be “shadowed into”) in the user address space are only those that hold contents that are “safe” to disclose to user mode. The first complication that KVA shadow encounters is that it would be inappropriate to shadow the kernel stack pages for each thread into the user mode address space, as this would allow potentially sensitive, privileged kernel memory contents on kernel thread stacks to be leaked via the rogue data cache load speculative side channel.

It is also desirable to keep the set of code and data structures that are shadowed into the user mode address space to a minimum, and if possible, to only shadow permanent fixtures in the address space (such as portions of the kernel image itself, and critical per-processor data structures such as the GDT (Global Descriptor Table), IDT (Interrupt Descriptor Table), and TSS). This simplifies memory management, as handling setup and teardown of new mappings that are shadowed into user mode address spaces has associated complexities, as would enabling any shadowed mappings to become pageable. For these reasons, it was clear that it would not be acceptable for the kernel’s trap handling code to continue to use the per-kernel-thread stack for kernel entry and kernel exit events. Instead, a new approach would be required.

The solution that was implemented for KVA shadow was to switch to a mode of operation wherein a small set of per-processor stacks (internally called KTRANSITION_STACKs) are the only stacks that are shadowed into the user mode address space. Eight of these stacks exist for each processor, the first of which represents the stack used for “normal” kernel entry events, such as exceptions, page faults, and most hardware interrupts, and the remaining seven transition stacks represent the stacks used for traps that are dispatched using the x64-defined IST (Interrupt Stack Table) mechanism (note that Windows does not use all 7 possible IST stacks presently).

When KVA shadow is active, then, the KTSS64.Rsp0 field of each processor points to the first transition stack of each processor, and each of the KTSS64.Ist[n] fields point to the n-th KTRANSITION_STACK for that processor. For convenience, the transition stacks are located in a contiguous region of memory, internally termed the KPROCESSOR_DESCRIPTOR_AREA, that also contains the per-processor GDT, IDT, and TSS, all of which are required to be shadowed into the user mode address space for the processor itself to be able to handle ring transitions properly. This contiguous memory block is, itself, shadowed in its entirety.
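A hypothetical sketch of how such a contiguous, shadowed per-processor region might be laid out is shown below; the structure name, sizes, and ordering are invented for illustration and do not correspond to the real internal layout.

#include <stdint.h>

#define TRANSITION_STACK_SIZE  0x1000   /* illustrative size only       */
#define TRANSITION_STACK_COUNT 8        /* 1 "normal" + 7 IST stacks    */

/* Hypothetical per-processor descriptor area: everything the CPU must
 * see during a ring transition lives in one contiguous block, so the
 * whole block can be shadowed into the user mode address space. */
typedef struct _PROCESSOR_DESCRIPTOR_AREA_SKETCH {
    uint8_t Gdt[0x80];                  /* Global Descriptor Table      */
    uint8_t Idt[0x1000];                /* Interrupt Descriptor Table   */
    uint8_t Tss[0x68];                  /* Task State Segment           */
    uint8_t TransitionStacks[TRANSITION_STACK_COUNT][TRANSITION_STACK_SIZE];
} PROCESSOR_DESCRIPTOR_AREA_SKETCH;

/* Conceptually, KTSS64.Rsp0 would point at the top of TransitionStacks[0]
 * and KTSS64.Ist[n] at the top of TransitionStacks[n], so every kernel
 * entry lands on a stack that is safe to shadow into user mode. */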

This configuration ensures that when a kernel entry event is fielded while KVA shadow is active, that the current stack is both shadowed into the user mode address space, and does not contain sensitive memory contents that would be risky to disclose to user mode. However, in order to maintain these properties, the trap dispatch code must be careful to push no sensitive information onto any transition stack at any time. This necessitates the first several rules for KVA shadow in order to avoid any other memory contents from being stored onto the transition stacks: when executing on a transition stack, the kernel must be fielding a kernel entry or kernel exit event, interrupts must be disabled and must remain disabled throughout, and the code executing on a transition stack must be careful to never incur any other type of kernel trap. This also implies that the KVA shadow trap dispatch code can assume that traps arising in kernel mode already are executing with the correct CR3, and on the correct kernel stack (except for some special considerations for IST-delivered traps, as discussed below).

Fielding a trap with KVA shadow active

Based on the above design decisions, there is an additional set of tasks specific to KVA shadowing that must occur prior to the normal trap handling code in the kernel being invoked for kernel entry trap events. In addition, there is a similar set of tasks related to KVA shadow that must occur at the end of trap processing, if a kernel exit is occurring.

On normal kernel entry, the following sequence of events must occur:

  1.  The kernel GS base value must be loaded. This enables the remaining trap code to access per-processor data structures, such as those that hold the kernel CR3 value for the current processor.
  2. The processor’s address space must be switched to the kernel address space, so that all kernel code and data are accessible (i.e., the kernel CR3 value must be loaded). This necessitates that the kernel CR3 value be stored in a location that is, itself, shadowed. For the purposes of KVA shadow, a single per-processor KPRCB page that contains only “safe” contents maintains a copy of the current processor’s kernel CR3 value for easy access by the KVA shadow trap dispatch code. Context switches between address spaces, as well as process attach/detach, update the corresponding KPRCB fields with the new CR3 value on process address space changes.
  3. The machine frame previously pushed by hardware as a part of the ring transition from user mode to kernel mode must be copied from the current (transition) stack, to the per-kernel-thread stack for the current thread.
  4. The current stack must be switched to the per-kernel-thread stack. At this point, the “normal” trap handling code can largely proceed as usual, and without invasive modifications (save that the kernel GS base has already been loaded).

Roughly speaking, the inverse sequence of events must occur on normal kernel exit; the machine frame at the top of the current kernel thread stack must be copied to the transition stack for the processor, the stacks must be switched, CR3 must be reloaded with the corresponding value for the user mode address space of the current process, the user mode GS base must be reloaded, and then control may be returned to user mode.
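Purely as a sketch of the ordering described above, the C-style pseudocode below walks through a normal kernel entry and its mirror-image kernel exit. The helper names (swapgs, write_cr3, switch_stack), the MFRAME and PRCB_SHADOW_PAGE types, and the function names are hypothetical stand-ins; the real entry and exit paths are hand-written assembly, not C.

#include <stdint.h>

/* Hypothetical stand-ins for hardware/kernel primitives. */
typedef struct { uint64_t Rip, SegCs, EFlags, Rsp, SegSs; } MFRAME;
typedef struct { uint64_t KernelCr3; uint64_t UserCr3; } PRCB_SHADOW_PAGE;

static void swapgs(void)            { /* SWAPGS: swap user/kernel GS base */ }
static void write_cr3(uint64_t cr3) { (void)cr3; /* MOV CR3, value        */ }
static void switch_stack(void *rsp) { (void)rsp; /* load new RSP          */ }

/* Normal kernel entry with KVA shadow active (conceptual ordering only). */
void kva_shadow_kernel_entry(PRCB_SHADOW_PAGE *prcb,
                             MFRAME *transition_frame,
                             MFRAME *thread_stack_frame,
                             void   *thread_kernel_stack)
{
    swapgs();                                /* 1. load kernel GS base     */
    write_cr3(prcb->KernelCr3);              /* 2. switch to kernel space  */
    *thread_stack_frame = *transition_frame; /* 3. copy machine frame      */
    switch_stack(thread_kernel_stack);       /* 4. run on per-thread stack */
    /* ...normal trap handling proceeds from here... */
}

/* Normal kernel exit: roughly the inverse sequence. */
void kva_shadow_kernel_exit(PRCB_SHADOW_PAGE *prcb,
                            MFRAME *thread_stack_frame,
                            MFRAME *transition_frame,
                            void   *transition_stack)
{
    *transition_frame = *thread_stack_frame; /* copy machine frame back      */
    switch_stack(transition_stack);          /* back onto transition stack   */
    write_cr3(prcb->UserCr3);                /* switch to user address space */
    swapgs();                                /* restore user GS base         */
    /* ...IRETQ (or SYSRETQ) returns to user mode here...                    */
}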

System service call entry and exit through the SYSCALL/SYSRETQ instruction pair is handled slightly differently, in that the processor does not push a machine frame itself, because the kernel logically does not have a current stack pointer until it explicitly loads one. In this case, no machine frame needs to be copied on kernel entry and kernel exit, but the other basic steps must still be performed.

Special care needs to be taken by the KVA shadow trap dispatch code for NMI, machine check, and double fault type trap events, because these events may interrupt even normally uninterruptable code. This means that they could even interrupt the normally uninterruptable KVA shadow trap dispatch code itself, during a kernel entry or kernel exit event. These types of traps are delivered using the IST mechanism onto their own distinct transition stacks, and the trap handling code must carefully handle the case of the GS base or CR3 value being in any state due to the indeterminate state of the machine at the time in which these events may occur, and must preserve the pre-existing GS base or CR3 values.

At this point, the basics for how to enter and exit the kernel with KVA shadow are in place. However, it would be undesirable to inline the KVA shadow trap dispatch code into the standard trap entry and trap exit code paths, as the standard trap entry and trap exit code paths could be located anywhere in the kernel’s .text code section, and it is desirable to minimize the amount of code that needs be shadowed into the user address space. For this reason, the KVA shadow trap dispatch code is collected into a series of parallel entry points packed within their own code section within the kernel image, and either the standard set of trap entry points, or the KVA shadow trap entry points are installed into the IDT at system boot time, based on whether KVA shadow is in use at system boot. Similarly, the system service call entry points are also located in this special code section in the kernel image.

Note that one implication of this design choice is that KVA shadow does not protect against attacks against kernel ASLR using speculative side channels. This is a deliberate decision given the design complexity of KVA shadow, timelines involved, and the realities of other side channel issues affecting the same processor designs. Notably, processors susceptible to rogue data cache load are also typically susceptible to other attacks on their BTBs (branch target buffers), and other microarchitectural resources that may allow kernel address space layout disclosure to a local attacker that is executing arbitrary native code.

Memory management considerations for KVA shadow

Now that KVA shadow is able to handle trap entry and trap exit, it’s necessary to understand the implications of KVA shadowing on memory management. As with the trap handling design considerations for KVA shadow, ensuring the correct security properties, providing good performance characteristics, and maximizing the maintainability of code changes were all important design goals. Where possible, rules were established to simplify the memory management design implementation. For example, all kernel allocations that are shadowed into the user mode address space are shadowed system-wide and not per-process or per-processor. As another example, all such shadowed allocations exist at the same kernel virtual address in both the user mode and kernel mode address spaces and share the same underlying physical pages in both address spaces, and all such allocations are considered nonpageable and are treated as though they have been locked into memory.

The most apparent memory management consequence of KVA shadowing is that each process typically now needs a separate address space (i.e., page table hierarchy, or top level page directory page) allocated to describe the shadow user address space, and that the top level page directory entries corresponding to user mode VAs must be replicated from the process’s kernel address space top level page directory page to the process’s user address space top level page directory page.
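A minimal sketch of that replication is shown below, assuming a hypothetical array-of-uint64_t representation of the two top-level page directory pages; on x64, the low 256 PML4 entries map the user-mode half of the virtual address space.

#include <stdint.h>

#define PML4_ENTRIES      512
#define USER_PML4_ENTRIES 256   /* low half of the x64 virtual address space */

/* Hypothetical replication of user-half top-level entries from the kernel
 * address space's top level page into the shadow user address space's top
 * level page. The kernel-half entries (indices 256..511) are intentionally
 * NOT copied; the shadow space gets only minimal transition mappings there. */
static void replicate_user_top_level_entries(uint64_t *kernel_top_level,
                                             uint64_t *user_top_level)
{
    for (int i = 0; i < USER_PML4_ENTRIES; i++)
        user_top_level[i] = kernel_top_level[i];
}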

The top level page directory page entries for the kernel half of the VA space are not replicated, however, and instead only correspond to a minimal set of page table pages needed to map the small subset of pages that have been explicitly shadowed into the user mode address space. As noted above, pages that are shadowed into the user mode address space are left nonpageable for simplicity. In practice, this is not a substantial hardship for KVA shadow, as only a very small number of fixed allocations are ever shadowed system-wide. (Remember that only the per-processor transition stacks are shadowed, not any per-thread data structures, such as per-thread kernel stacks.)

Memory management must then replicate any updates to top level user mode page directory page entries between the two process address spaces as those updates occur, and access bit handling for working set aging and other purposes must logically OR the access bits from both the user and kernel address spaces together when a top level page directory page entry is being considered (and, similarly, working set aging must clear access bits in both top level page directory pages when a top level entry is being considered). Similarly, memory management must be aware of both address spaces that may exist for a process in various other edge cases where top-level page directory pages are manipulated.

Finally, no general purpose kernel allocations can be marked as “global” in their corresponding leaf page table entries by the kernel. For KVA shadow protections to be effective, processors susceptible to rogue data cache load must not be able to observe, while in user mode, any cached virtual address translations for privileged kernel pages that could contain sensitive memory contents, and such global entries would remain cached in the processor translation buffer (TB) across an address space switch.

Booting is just the beginning of a journey

At this point, we have covered some of the major areas involved in the kernel with respect to KVA shadow. However, there’s much more that’s involved beyond just trap handling and memory management: For example, changes to how Windows handles multiprocessor initialization, hibernate and resume, processor shutdown and reboot, and many other areas were all required in order to make KVA shadow into a fully featured solution that works correctly in all supported software configurations.

Furthermore, preventing the rogue data cache load issue from exposing privileged kernel mode memory contents is just the beginning of turning KVA shadow into a feature that could be shipped to a diverse customer base. So far, we have only touched on the basics of the highlights of an unoptimized implementation of KVA shadow on x64 Windows. We’re far from done examining KVA shadowing, however; a substantial amount of additional work was still required in order to reduce the performance overhead of KVA shadow to the absolute minimum possible. As we’ll see, there are a number of options that have been considered and employed to that end with KVA shadow. The below optimizations are already included with the January 3rd, 2018 security updates to address rogue data cache load.

Performance optimizations

One of the primary challenges faced by the implementation of KVA shadow was maximizing system performance. The model of a unified, flat address space shared between user and kernel mode, with page permission bits to protect kernel-owned pages from access by unprivileged user mode code, is both convenient for an operating system kernel to implement, and easily amenable to high performance user/kernel transitions.

The reason why the traditional, unified address space model allows for fast user/kernel transitions relates to how processors handle virtual memory. Processors typically cache previously fetched virtual address translations in a small internal cache that is termed a translation buffer, (or TB, for short); some literature also refers to these types of address translation caches as translation lookaside buffers (or TLBs for short). The processor TB operates on the principle of locality: if an application (or the kernel) has referenced a particular virtual address translation recently, it is likely to do so again, and the processor can save the costly process of re-walking the operating system’s page table hierarchy if the requisite translation is already cached in the processor TB.

Traditionally, a TB contains information that is primarily local to a particular address space (or page table hierarchy), and when a switch to a different page table hierarchy occurs, such as with a context switch between threads in different processes, the processor TB must be flushed so that translations from one process are not improperly used in the context of a different process. This is critical, as two processes can, and frequently do, map the same user mode virtual address to completely different physical pages.

KVA shadowing requires switching address spaces much more frequently than operating systems have traditionally done so, however; on processors susceptible to the rogue data cache load issue, it is now necessary to switch the address space on every user/kernel transition, which are vastly more frequent events than cross-process context switches. In the absence of any further optimizations, the fact that the processor TB is flushed and invalidated on each user/kernel transition would substantially reduce the benefit of the processor TB, and would represent a significant performance cost on the system.

Fortunately, there are some techniques that the Windows KVA shadow implementation employs to substantially mitigate the performance costs of KVA shadowing on processor hardware that is susceptible to rogue data cache load. Optimizing KVA shadow for maximum performance presented a challenging exercise in finding creative ways to make use of existing, in-the-field hardware capabilities, sometimes outside the scope of their original intended use, while still maintaining system security and correct system operation, but several techniques have been developed to substantially reduce the cost.

PCID acceleration

The first optimization, the usage of PCID (process-context identifier) acceleration, is relevant to Intel Core-family processors of Haswell and newer microarchitectures. While the TB on many processors traditionally maintained information local to an address space, and which had to be flushed on any address space switch, the PCID hardware capability allows address translations to be tagged with a logical PCID that informs the processor which address space they are relevant to. An address space (or page table hierarchy) can be tagged with a distinguished PCID value, and this tag is maintained with any non-global translations that are cached in the processor’s TB; then, on an address space switch to an address space with a different associated PCID, the processor can be instructed to preserve the previous TB contents. Because the processor requires the current address space’s PCID to match that of any cached translation in the TB for the purposes of matching any translation lookups in the TB, address translations from multiple address spaces can now be safely represented concurrently in the processor TB.

On hardware that is PCID-capable and which requires KVA shadowing, the Windows kernel employs two distinguished PCID values, which are internally termed PCID_KERNEL and PCID_USER. The kernel address space is tagged with PCID_KERNEL, and the user address space is tagged with PCID_USER, and on each user/kernel transition, the kernel will typically instruct the processor to preserve the TB contents when switching address spaces. This enables the preservation of the entire TB contents on system service calls and other high frequency user/kernel transitions, and in many workloads, substantially mitigates almost all of the cost of KVA shadowing. Some duplication of TB entries between user and kernel mode is possible if the same user mode VA is referenced by user and kernel code, and additional processing is also required on some types of TB flushes, as certain types of TB flushes (such as those that invalidate user mode VAs) must be replicated to both user and kernel PCIDs. However, this overhead is typically relatively minor compared to the loss of all TB entries if the entire TB were not preserved on each user/kernel transition.
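As an illustration, the sketch below composes the value that would be written to CR3 when CR4.PCIDE is enabled: the low 12 bits carry the PCID, and setting bit 63 of the source operand asks the processor to preserve, rather than flush, cached translations tagged with that PCID. The PCID constants here are arbitrary placeholders and not the values the Windows kernel actually assigns to PCID_KERNEL and PCID_USER.

#include <stdint.h>

#define PCID_KERNEL_SKETCH   1ull     /* placeholder values; the real    */
#define PCID_USER_SKETCH     2ull     /* kernel's choices may differ     */
#define CR3_NO_FLUSH_BIT     (1ull << 63)
#define CR3_PCID_MASK        0xFFFull

/* Compose a CR3 value: page-table base (4KB aligned), PCID in bits 0..11,
 * and bit 63 set to request that TB entries tagged with this PCID be
 * preserved rather than flushed on the address space switch. */
static uint64_t make_cr3(uint64_t top_level_page_physical,
                         uint64_t pcid,
                         int preserve_translations)
{
    uint64_t cr3 = (top_level_page_physical & ~CR3_PCID_MASK)
                 | (pcid & CR3_PCID_MASK);
    if (preserve_translations)
        cr3 |= CR3_NO_FLUSH_BIT;      /* directive bit; not stored in CR3 */
    return cr3;
}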

On address space switches between processes, such as context switches between two different processes, the entire TB is invalidated. This must be performed because the PCID values assigned by the kernel are not process-specific, but are global to the entire system. Assigning different PCID values to each process (which would be a more “traditional” usage of PCID) would preclude the need to flush the entire TB on context switches between processes, but would also require TB flush IPIs (interprocessor-interrupts) to be sent to a potentially much larger set of processors, specifically being all of those that had previously loaded a given PCID, which in and of itself is a performance trade-off due to the cost involved in TB flush IPIs.

It’s important to note that PCID acceleration also requires the hypervisor to expose CR4.PCID and the INVPCID instruction to the Windows kernel. The Hyper-V hypervisor was updated to expose these capabilities with the January 3rd, 2018 security updates. Additionally, the underlying PCID hardware capability is only defined for the native 64-bit paging mode, and thus a 64-bit kernel is required to take advantage of PCID acceleration (32-bit applications running under a 64-bit kernel can still benefit from the optimization).

User/global acceleration

Although many modern processors can take advantage of PCID acceleration, older Intel Core family processors, and current Intel Atom family processors do not provide hardware support for PCID and thus cannot take advantage of that PCID support to accelerate KVA shadowing. These processors do allow a more limited form of TB preservation across address space switches, however, in the form of the “global” page table entry bit. The global bit allows the operating system kernel to communicate to the processor that a given leaf translation is “global” to the entire system, and need not be invalidated on address space switches. (A special facility to invalidate all translations including global translations is provided by the processor, for cases when the operating system changes global memory translations. On x64 and x86 processors, this is accomplished by toggling the CR4.PGE control register bit.)

Traditionally, the kernel would mark most kernel mode page translations as global, in order to indicate that these address translations can be preserved in the TB during cross-process address space switches while all non-global address translations are flushed from the TB. The kernel is then obligated to ensure that both incoming and outgoing address spaces provide consistent translations for any global translations in both address spaces, across a global-preserving address space switch, for correct system operation. This is a simple matter for the traditional use of kernel virtual address management, as most of the kernel address space is identical across all processes. The global bit, thus, elegantly allows most of the effective TB contents for kernel VAs to be preserved across context switches with minimal hardware and software complexity.

In the context of KVA shadow, however, the global bit can be used for a completely different purpose than its original intention, for an optimization termed “user/global acceleration”. Instead of marking kernel pages as global, KVA shadow marks user pages as global, indicating to the processor that all pages in the user mode half of the address space are safe to preserve across address space switches. While an address space switch must still occur on each user/kernel transition, global translations are preserved in the TB, which preserves the user TB entries. As most applications primarily spend their time executing in user mode, this mode of operation preserves the portion of the TB that is most relevant to most applications. The TB contents for kernel virtual addresses are unavoidably lost on each address space switch when user/global acceleration is in use, and as with PCID acceleration, some TB flushes must be handled differently (and cross-process context switches require an entire TB flush), but preserving the user TB contents substantially cuts the cost of KVA shadowing over the more naïve approach of marking no translations as global.
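The sketch below illustrates the idea in terms of a hypothetical leaf page table entry: with user/global acceleration, the global flag (bit 8 of an x64 leaf PTE) is applied to user-half translations rather than to kernel ones. The helper name and policy function are illustrative, not the kernel's actual code.

#include <stdint.h>
#include <stdbool.h>

#define PTE_GLOBAL_BIT   (1ull << 8)   /* x64 leaf PTE "global" flag */

/* Hypothetical helper: decide whether a leaf PTE should carry the global
 * bit under user/global acceleration. Traditionally the answer would be
 * "yes for kernel VAs"; with KVA shadow it flips to "yes for user VAs",
 * so user-mode TB entries survive the address space switch performed on
 * every user/kernel transition. */
static uint64_t apply_user_global_policy(uint64_t pte, bool is_user_va)
{
    if (is_user_va)
        return pte | PTE_GLOBAL_BIT;
    return pte & ~PTE_GLOBAL_BIT;      /* kernel VAs must never be global */
}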

Privileged process acceleration

The purpose of KVA shadowing is to protect sensitive kernel mode memory contents from disclosure to untrusted user mode applications. This is required for security purposes in order to maintain privilege separation between kernel mode and user mode. However, highly-privileged applications that have complete control over the system are typically trusted by the operating system for a variety of tasks, up to and including loading drivers, creating kernel memory dumps, and so on. These applications effectively already have the privileges required in order to access kernel memory, and so KVA shadowing is of minimal benefit for these applications.

KVA shadow thus optimizes highly privileged applications (specifically, those that have a primary token which is a member of the BUILTIN\Administrators group, which includes LocalSystem, and processes that execute as a fully-elevated administrator account) by running these applications only with the KVA shadow “kernel” address space, which is very similar to how applications execute on processors that are not susceptible to rogue data cache load. These applications avoid most of the overhead of KVA shadowing, as no address space switch occurs on user/kernel transitions. Because these applications are fully trusted by the operating system, and already have (or could obtain) the capability to load drivers that could naturally access kernel memory, KVA shadowing is not required for fully-privileged applications.
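Purely as a user-mode analogue of the kind of group-membership check described above (and explicitly not the kernel's internal implementation), the sketch below uses the documented CheckTokenMembership API to ask whether the caller's effective token is a member of BUILTIN\Administrators.

/* User-mode analogue only: asks whether the current token is a member of
 * BUILTIN\Administrators. The kernel's own privileged-process determination
 * is internal and not necessarily implemented this way. */
#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "advapi32.lib")

int main(void)
{
    SID_IDENTIFIER_AUTHORITY ntAuthority = SECURITY_NT_AUTHORITY;
    PSID adminsSid = NULL;
    BOOL isMember = FALSE;

    if (AllocateAndInitializeSid(&ntAuthority, 2,
                                 SECURITY_BUILTIN_DOMAIN_RID,
                                 DOMAIN_ALIAS_RID_ADMINS,
                                 0, 0, 0, 0, 0, 0, &adminsSid)) {
        /* NULL token handle means "use the caller's effective token". */
        if (CheckTokenMembership(NULL, adminsSid, &isMember))
            printf("Member of BUILTIN\\Administrators: %s\n",
                   isMember ? "yes" : "no");
        FreeSid(adminsSid);
    }
    return 0;
}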

Optimizations are ongoing

The introduction of KVA shadowing radically alters how the Windows kernel fields traps and exceptions from a processor, and significantly changes several key aspects of memory management. While several high-value optimizations have already been deployed with the initial release of operating system updates to integrate KVA shadow support, research into additional avenues of improvement and opportunities for performance tuning continues. KVA shadow represents a substantial departure from some existing operating system design paradigms, and with any such substantial shift in software design, exploring all possible optimizations and performance tuning opportunities is an ongoing effort.

Driver and application compatibility

A key consideration of KVA shadow was that existing applications and drivers must continue to work. Specifically, it would not have been acceptable to change the Windows ABI, or to invalidate how drivers work with user mode memory, in order to integrate KVA shadow support into the operating system. Applications and drivers that use supported and documented interfaces are highly compatible with KVA shadow, and no changes to how drivers access user mode memory through supported and documented means are necessary. For example, under a try/except block, it is still possible for a driver to use ProbeForRead to probe a user mode address for validity, and then to copy memory from that user mode virtual address (under try/except protection). Similarly, MDL mappings to/from user mode memory still function as before.
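For example, a hedged sketch of that still-supported probe-and-capture pattern in a kernel driver might look like the following; the function and buffer names are illustrative, and error handling is abbreviated.

#include <ntddk.h>

/* Sketch: safely capture a caller-supplied user-mode buffer. This pattern
 * is unchanged by KVA shadow; supported probe-and-copy under SEH keeps
 * working as before. */
NTSTATUS CaptureUserBuffer(PVOID UserBuffer, SIZE_T Length, PVOID KernelCopy)
{
    NTSTATUS status = STATUS_SUCCESS;

    __try {
        /* Validate that the range really lies in user address space. */
        ProbeForRead(UserBuffer, Length, sizeof(UCHAR));

        /* Copy under SEH: an invalid user address raises an exception
         * that the __except block converts into an error status.      */
        RtlCopyMemory(KernelCopy, UserBuffer, Length);
    } __except (EXCEPTION_EXECUTE_HANDLER) {
        status = GetExceptionCode();
    }

    return status;
}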

A small number of drivers and applications did, however, encounter compatibility issues with KVA shadow. By and large, the majority of incompatible drivers and applications used substantially unsupported and undocumented means to interface with the operating system. For example, Microsoft encountered several software applications from multiple software vendors that assumed that the raw machine instructions in certain, non-exported Windows kernel functions would remain static or unchanged with software updates. Such approaches are highly fragile and are subject to breaking at even slight perturbations of the operating system kernel code.

Operating system changes like KVA shadow, that necessitated a security update which changed how the operating system manages memory and trap and exception dispatching, underscore the fragility of depending on highly unsupported and undocumented mechanisms in drivers and applications. Microsoft strongly encourages developers to use supported and documented facilities in drivers and applications. Keeping customers secure and up to date is a shared commitment, and avoiding dependencies on unsupported and undocumented facilities and behaviors is critical to meeting the expectations that customers have with respect to keeping their systems secure.

Conclusion

Mitigating hardware vulnerabilities in software is an extremely challenging proposition, whether you are an operating system vendor, driver writer, or an application vendor. In the case of rogue data cache load and KVA shadow, the Windows kernel is able to provide a transparent and strong mitigation for drivers and applications, albeit at the cost of additional operating system complexity, and especially on older hardware, at some potential performance cost depending on the characteristics of a given workload. The breadth of changes required to implement KVA shadowing was substantial, and KVA shadow support easily represents one of the most intricate, complex, and wide-ranging security updates that Microsoft has ever shipped. Microsoft is committed to protecting our customers, and we will continue to work with our industry partners in order to address speculative execution side channel vulnerabilities.

Ken Johnson, Microsoft Security Response Center (MSRC)
