
From MBR to GPT without data loss


Acronyms used in this article are explained at the end!

Express summary for technicians

There are several security features in Windows 10 that can only be used with Secure Boot enabled. Unfortunately, Secure Boot cannot be turned on as long as the UEFI firmware is running in so-called "legacy BIOS" mode. If you simply switch to "native UEFI" mode, the previously installed operating system can no longer boot, because a modern UEFI cannot do much with the good old MBR.
To convert the MBR to GPT, you can resort to any number of third-party tools that have to be installed and/or licensed, or simply use the MBR2GPT.EXE utility that has shipped with the operating system since Windows 10 version 1703.

When can this be helpful?

The latest version of our "Securing the Windows Client" training (https://www.microsoft.com/de-at/services/PremierWorkshops.aspx) includes, among other things, labs on Virtualization-based Security, i.e. we implement Credential Guard and Device Guard in Hyper-V virtual machines. For this to work, certain hardware requirements must be met, and to make sure suitable machines would be available I emailed those requirements to the learning partner in advance. Naively, I used wording along the lines of: "…the machines must support UEFI and Secure Boot…". The sobering result was that the machines on site were UEFI-capable, but only "legacy BIOS" was enabled instead of UEFI, which in turn meant that Secure Boot could not be enabled either.
Now we had a problem: after switching to UEFI, the system expects to boot from a GPT (GUID Partition Table), while the hard disks were configured with MBR. Keeping the existing configuration would have resulted in non-working labs. A reinstallation using the training partner's automated setup would again have created MBR partitions and was therefore not an option either. Using third-party tools to rewrite the partition tables from MBR to GPT would also have been complicated, would have taken too much time and in some cases would have required a software license. To avoid jeopardizing the start and flow of the workshop, a quick solution was needed.

No blog article without a happy ending

In this case, rescue came in the form of a small tool called "MBR2GPT.EXE" (integrated into Windows 10 since version 1703). As the name suggests, it converts an MBR partition table into a GPT partition table without data loss. Using it is as simple as it gets: you start Windows 10, run the tool, and shut down the machine. You then enable UEFI and Secure Boot in the firmware settings and, lo and behold, Windows 10 boots.
A description of the tool and its parameters can be found at https://docs.microsoft.com/en-us/windows/deployment/mbr-to-gpt

Even though I have since tried this tool many times on a wide variety of machines, you can never completely rule out that something goes wrong. A full backup is therefore mandatory!

The following example assumes that the MBR of disk 0 is to be validated and then converted (a consolidated sketch follows the steps below):

  1. You need an elevated PowerShell prompt (aka "run as Administrator")
  2. Display the disk details:
    Get-Disk | ft -Auto
  3. Check whether a conversion is possible:
    mbr2gpt.exe /validate /disk:0
  4. The actual conversion is started with the following command:
    mbr2gpt.exe /convert /disk:0 /allowFullOS

    The tool was originally designed to run in a Windows PE environment; with the /allowFullOS option it can be started from a full installation of Windows 10.

  5. Shut down or restart the machine and enable UEFI and Secure Boot.
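The individual steps can also be wrapped into a single script. The following is a minimal sketch, assuming disk 0 and that mbr2gpt.exe signals success through exit code 0 (check the documentation linked above for the exact return codes of your build):

# Minimal sketch: validate disk 0 first and only convert if validation succeeds.
# Assumption: an exit code of 0 from mbr2gpt.exe indicates success.
Get-Disk | Format-Table -AutoSize                      # review the current partition style

mbr2gpt.exe /validate /disk:0 /allowFullOS
if ($LASTEXITCODE -eq 0) {
    mbr2gpt.exe /convert /disk:0 /allowFullOS
    if ($LASTEXITCODE -eq 0) {
        Write-Output "Conversion finished - enable UEFI and Secure Boot in the firmware and restart."
    } else {
        Write-Warning "Conversion failed with exit code $LASTEXITCODE - review the mbr2gpt log files."
    }
} else {
    Write-Warning "Validation failed with exit code $LASTEXITCODE - the disk was not touched."
}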

UEFI, Secure Boot, MBR and GPT

The MBR (Master Boot Record) is a boot loader combined with a partition table for BIOS-based computers.

The GPT (GUID Partition Table) is a standard for the layout of partition tables on storage media. The GPT specification is part of the UEFI (Unified Extensible Firmware Interface) standard, which has almost completely replaced the traditional BIOS in modern PCs. For backwards compatibility, modern hardware usually still supports both variants; the modern UEFI is enabled by default and the BIOS option is typically labelled "Legacy BIOS".

While a home user simply installs a new PC from a USB stick and never notices UEFI or GPT, the switch can be less straightforward for companies, because the automated setup processes have to be adjusted: a PC with UEFI enabled cannot boot from an MBR partition.

Secure Boot restricts booting a machine to trusted, signed boot loaders and thereby ensures that manipulated boot loaders (aka "bootkits") cannot be started. Secure Boot is supported from Windows 8 onwards and requires UEFI to be enabled.
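Whether the switch was successful can also be checked from within Windows. The following minimal sketch uses the built-in Confirm-SecureBootUEFI cmdlet, which requires an elevated PowerShell prompt and throws an error when the machine is still booted in legacy BIOS mode:

# Minimal sketch: query the Secure Boot state (elevated prompt required).
try {
    if (Confirm-SecureBootUEFI) {
        Write-Output "Secure Boot is enabled."
    } else {
        Write-Output "UEFI is active, but Secure Boot is disabled."
    }
} catch {
    Write-Warning "The system does not appear to be booted in native UEFI mode."
}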


Inspire 2017 Windows Server And Hybrid Cloud Session Recordings


Following up from the last post on Inspire's Office 365 session recordings, this post includes the links to the recorded sessions for topics related to Windows Server and hybrid cloud integration scenarios with Microsoft Azure, including Azure Stack. For those of you looking at how to integrate your customers' on-premises workloads with Azure offerings, or even those of you thinking about their cloud migration paths, there's sure to be something of value here for you. Make sure you take a look at Windows Server Software-Defined validated solutions as well, to see which hardware partners Microsoft is working with.

CE508 Realize the massive opportunity to modernize legacy .NET applications with containers and Windows Server

Organizations need trusted partners to help evolve and transform legacy and .NET applications for the cloud model. Inventory customer application environments, migrate apps into containers to gain operational efficiency, and innovate and evolve the application using modern development patterns.

Watch Video

CE202p Modernizing traditional apps using containers

Unlock multiple opportunities in one account using containers. In this customer case study, learn how to educate, advise, and walk customers through modernizing traditional apps using containers.

Watch Video

MSP10 Infrastructure: Remote Desktop Services (RDS): Why do you support 24×7 infrastructure for apps that run 8AM – 5PM?

With scripting or support from the ISVs, service providers can take advantage of the application and desktop usage patterns on Microsoft Azure. The RDS optimization and infrastructure scaling is native to cloud. It allows for 30%-70% ongoing infrastructure cost savings and drives margin expansion.

Watch Video

PRAC06p How to help your customer migrate their workloads to the Cloud

Come learn the strategy that Microsoft has developed to help customers migrate their application catalog and manage it in the cloud.

Watch Video

MSP01 Modern licensing for Microsoft Azure and Azure Stack

In this session we touch on modern licensing and how it will enable you to leverage Microsoft software and online services for your hybrid cloud solutions. We will also cover licensing for Azure and Azure Stack on CSP, including various hybrid (CSP, SPLA and VL) licensing scenarios.

Watch Video

MSIT01 Harness the power of the cloud to transform a global enterprise

Cloud computing continues to evolve the way we live and work. To help you navigate your own journey to the cloud, we share the insights, expertise, and best practices gathered while building the intelligent cloud platform that powers Microsoft.

Watch Video

MSP04 Empowering digital transformation with hosting and managed service providers

In this session, hear from Microsoft's VP of WW Cloud & Hosting Service Providers on how Microsoft and its partners are empowering digital transformation for customers.

Watch Video

MSP07 The power of Azure Stack: Seize the hybrid Azure opportunity

Azure provides a rich platform for developers to modernize their applications. Most of those applications move to public cloud quickly, however some face technological and regulatory obstacles. Learn how Azure Stack can overcome obstacles and make hybrid cloud computing a reality for your customers.

Watch Video

MSIT06p Drive resource and cost efficiencies with Microsoft Azure optimization

Learn how Microsoft IT overcame widespread underutilization of resources after we moved 90% of our computing resources to Azure, resulting in 46% monthly savings and a 400% increase in processor utilization. Walk away with ideas and tools to drive efficiency in your own environment.

Watch Video

CE506 Extend your Microsoft Azure business with Azure Stack

Organizations are committing to hybrid as their long-term strategy for next-gen application innovation. Azure Stack adds an exciting new element for developing and operating new hybrid cloud solutions and expands your Azure practice, helping you reach new customers and deliver differentiated solutions.

Watch Video

MSP11 How to transform your business beyond your comfort zone

Learn how cloud service providers transform and grow their business. Panel discussion with three partners who have gone through significant transformation and evolved their portfolio together with Microsoft to serve the changing needs of customers.

Watch Video

CE512 Grow your cloud revenue with Microsoft Azure management and security offerings

As customers move to the cloud, accelerate their journey and grow your revenue with Azure management and security services. Fill the gap in monitoring, security, and protection of your customers’ cloud infrastructure with a solution that works across hybrid environments.

Watch Video

 

Windows Server 2016 NTFS sparse file/Data Deduplication users: please install KB4025334


Hi folks,

KB4025334 prevents a critical data corruption issue with NTFS sparse files in Windows Server 2016. This helps avoid data corruptions that may occur when using Data Deduplication in Windows Server 2016, although all applications and Windows components that use sparse files on NTFS benefit from applying this update. Installation of this KB helps avoid any new or further corruptions for Data Deduplication users on Windows Server 2016. This does not help recover existing corruptions that may have already happened. This is because NTFS incorrectly removes in-use clusters from the file and there is no ability to identify what clusters were incorrectly removed after the fact. Although KB4025334 is an optional update, we strongly recommend that all NTFS users, especially those using Data Deduplication, install this update as soon as possible. This fix will become mandatory in the "Patch Tuesday" release for August 2017.
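A quick way to check whether the update is already present on a given server is shown in the following minimal sketch; on a Failover Cluster the check has to be repeated on every node:

# Minimal sketch: check whether KB4025334 is installed on this server.
$kb = Get-HotFix | Where-Object { $_.HotFixID -eq 'KB4025334' }
if ($kb) {
    Write-Output "KB4025334 is installed (installed on $($kb.InstalledOn))."
} else {
    Write-Warning "KB4025334 is not installed - get it from the Microsoft Update Catalog."
}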

For Data Deduplication users, this data corruption is particularly hard to notice as it is a so-called "silent" corruption - it cannot be detected by the weekly Dedup integrity scrubbing job. Therefore, KB4025334 also includes an update to chkdsk to help identify which files are corrupted. Affected files can be identified using chkdsk with the following steps:

  1. Install KB4025334 on your server from the Microsoft Update Catalog and reboot. If you are running a Failover Cluster, this patch will need to be applied to all nodes in the cluster.
  2. Run chkdsk in read-only mode (this is the default mode for chkdsk)
  3. For potentially corrupted files, chkdsk will report something like the following
    The total allocated size in attribute record (128, "") of file 20000000000f3 is incorrect.

    where 20000000000f3 is the file id. Note all affected file ids.
  4. Use fsutil to look up the name of the file by its file id (a consolidated sketch of steps 2-4 follows this list). This should look like the following:

    E:\myfolder> fsutil file queryfilenamebyid e: 0x20000000000f3
    A random link name to this file is \\?\E:\myfolder\TEST.0

    where E:\myfolder\TEST.0 is the affected file.
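The consolidated sketch mentioned in step 4 is shown below. It assumes the English chkdsk message format quoted in step 3; the drive letter E: is only an example:

# Minimal sketch: run chkdsk read-only, extract the reported file ids and resolve them with fsutil.
$volume = 'E:'
$report = chkdsk $volume          # read-only scan (the default mode)

$fileIds = $report |
    Select-String -Pattern 'of file ([0-9a-f]+) is incorrect' |
    ForEach-Object { $_.Matches[0].Groups[1].Value } |
    Sort-Object -Unique

foreach ($id in $fileIds) {
    # fsutil prints a link name such as \\?\E:\myfolder\TEST.0 for each affected file
    fsutil file queryfilenamebyid $volume ("0x" + $id)
}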

We're very sorry for the inconvenience this issue has caused. Please don't hesitate to reach out in the comment section below if you have any additional questions about KB4025334, and we'll be happy to answer.

Tip of the Day: Windows 10 Tip - Change CMD/PowerShell transparency


Today's tip...

With CMD or PowerShell windows, hold down CTRL + Shift + Mouse Wheel up/down to change the transparency of the window instantly.

Partners: Thanks for joining us at Microsoft Inspire!


Last week in Washington D.C., we held Microsoft Inspire, our premier annual partner event. This year’s event was a huge success, with over 17,000 attendees joining us from 140 countries. A big thank you to all of the Enterprise Mobility + Security partners who came out to spend the week with us!


As we mentioned in our preview of Inspire, there were eight Enterprise Mobility + Security sessions at the event. For those of you that couldn’t make it, these sessions can be viewed on-demand here. We had many engaging conversations with our partners throughout the event, and there were a few key topics that we heard repeatedly:

  1. Excitement around Microsoft 365. During Monday morning’s keynote, Satya Nadella unveiled Microsoft 365, a new offering that brings together Office 365, Windows 10, and Enterprise Mobility + Security. The EMS partners that we spoke to were very excited about the potential of Microsoft 365, and were eager to learn more about the offering and start having conversations with their customers about how Microsoft 365 can help empower their digital transformations. Take a look at our recent blog post on EMS and Microsoft 365 to learn more.
  2. Security is top of mind. Many partners we spoke to were very enthusiastic to work with Microsoft to help keep our joint customers secure. Particularly, partners were interested in learning about One Microsoft Security, our unique approach to security and better understanding the power of the Microsoft Intelligent Security Graph. Partners left the event excited to discuss the power of the graph with their customers. Download the Security Practice Playbook to learn how to grow your security practice and transform your business.
  3. GDPR and Compliance. One item that came up repeatedly in conversations with our partners was the importance of GDPR and compliance. The GDPR enforcement date is now less than a year away, and many of our partners felt that their customers still had a lot of work to do to comply. Check out our GDPR Partner site to learn more about how you can partner with us to solve many of your customers’ GDPR and compliance needs.

Next year, Microsoft Inspire will be held in fabulous Las Vegas, Nevada. Register now to get your All Access pass. We’ll see you there!

Microsoft earnings report for the fourth quarter of fiscal year 2017

NPS and the "phantom" account lockout


Hello everyone, my name is Guilherme Pohlmann and I am part of the Windows support team in Brazil. I am here once again to share another scenario I faced a few weeks ago, which may help you in your daily support journeys. Enjoy the read!

 

Introduction

Ah, account lockout... a setting that can be both a guardian angel and a stone in your shoe if badly configured.

 

In general there is not much trouble in finding the origin of an account lockout, except on occasions where brute-force attacks are observed. Even when Exchange or other products are involved, each tool has a log that allows us to identify the "bad credentials", and NPS is no exception. As we can see in the Audit Network Policy Server article, there is a good range of events for monitoring access requests directed at an NPS server. To enable this auditing, simply apply the Audit Network Policy Server GPO setting, found under Computer Configuration\Policies\Windows Settings\Security Settings\Advanced Audit Policy Configuration\System Audit Policies\Logon/Logoff.
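If you prefer to enable the same auditing locally on a single NPS server instead of through a GPO, the built-in auditpol.exe utility can be used. This is a minimal sketch, assuming an English-language OS for the subcategory name (use auditpol /list /subcategory:* to confirm it on your system):

# Minimal sketch: enable success and failure auditing for the Network Policy Server subcategory.
auditpol /set /subcategory:"Network Policy Server" /success:enable /failure:enable

# Verify the resulting setting
auditpol /get /subcategory:"Network Policy Server"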

 

Among the events that can be generated, for analyzing account lockout issues we mainly use event 6273, which tells us that the NPS server denied access to a user, as well as the reason for the denial.
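To pull these denials quickly out of the Security log on the NPS server, a filter on the event ID is enough. The sketch below is minimal and the account name used in the filter is purely illustrative:

# Minimal sketch: list recent NPS "access denied" events (6273) from the Security log.
$events = Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 6273 } -MaxEvents 50

# Optionally narrow the result down to a single account (illustrative name)
$events | Where-Object { $_.Message -match 'Account Name:\s+user1' } |
    Select-Object TimeCreated, Id, MachineName |
    Format-Table -AutoSize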

 

Before continuing about this event, however, it is important to mention events 6279 and 6280, since these are also related to account lockouts; they refer, however, to the NPS Account Lockout feature. Although both settings - NPS Account Lockout and Active Directory Account Lockout - have the same goal of protecting the environment and/or the user against brute-force attacks, they work somewhat differently: while Active Directory Account Lockout responds to incorrect logon attempts by completely locking the user's account in Active Directory, NPS Account Lockout aims to block new authentication attempts at the NPS level without affecting the user's account in Active Directory. This article, however, focuses on Active Directory Account Lockout.

To understand how NPS Account Lockout works and how to enable it, I recommend reading the article NPS: Account Lockout.

 

Important note: we assume that, before reaching this analysis of the NPS server, you have already performed account lockout analysis on the domain controllers through audit logs, netlogon.log debugging, etc. Although they were released quite a while ago, the Account Lockout management tools are still very useful in this analysis.
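For the domain controller side of that analysis, the lockout itself is recorded as event 4740 on the PDC emulator. The following minimal sketch locates the PDC emulator and lists recent lockout events; it assumes the ActiveDirectory RSAT module is available, and the account name filter is illustrative:

# Minimal sketch: list recent account lockout events (4740) recorded on the PDC emulator.
$pdc = (Get-ADDomain).PDCEmulator
Get-WinEvent -ComputerName $pdc -FilterHashtable @{ LogName = 'Security'; Id = 4740 } -MaxEvents 20 |
    Where-Object { $_.Message -match 'user1' } |     # illustrative account name
    Select-Object TimeCreated, Message |
    Format-List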

 

Back to event 6273, the following sample event gives us an idea of how the analysis is done:

 

Log Name:      Security
Source:        Microsoft-Windows-Security-Auditing
Date:          16/05/2017 14:33:14
Event ID:      6273
Task Category: Network Policy Server
Level:         Information
Keywords:      Audit Failure
User:          N/A
Computer:      NPS01.contoso.local
Description:

Network Policy Server denied access to a user.

Contact the Network Policy Server administrator for more information.

User:
    Security ID:                  S-1-5-21-3927881245-1022922358-2527905271-1117
    Account Name:                 user1
    Account Domain:               contoso.local
    Fully Qualified Account Name: CONTOSO\user1

Client Machine:
    Security ID:                  NULL SID
    Account Name:                 -
    Fully Qualified Account Name: -
    OS-Version:                   -
    Called Station Identifier:    08-cc-68-??-??-??
    Calling Station Identifier:   e8-50-8b-??-??-??

NAS:
    NAS IPv4 Address:             192.168.110.200
    NAS IPv6 Address:             -
    NAS Identifier:               CONTOSONAS01
    NAS Port-Type:                Wireless - IEEE 802.11
    NAS Port:                     1

RADIUS Client:
    Client Friendly Name:         CONTOSOAP02
    Client IP Address:            192.168.115.12

Authentication Details:
    Proxy Policy Name:            Contoso-Wireless-Connections
    Network Policy Name:          Contoso-Wireless-Connections
    Authentication Provider:      Windows
    Authentication Server:        phcmisad02.PeninsulaAD.local
    Authentication Type:          PEAP
    EAP Type:                     29
    Account Session Identifier:   -
    Reason Code:                  34
    Reason:                       The user account that is specified in the RADIUS Access-Request message is disabled.

 

If a user has an access attempt denied by NPS, we expect an event 6273 to be generated explaining the reason. In the sample above, we can see that user user1 could not authenticate due to reason code 34, i.e. the user account is disabled. Pay attention to the Called Station Identifier and Calling Station Identifier fields. The Called Station Identifier reports the MAC address of the Wireless Controller or Access Point that received the credential and forwarded it to NPS, while the Calling Station Identifier reports the MAC address of the device that made the authentication attempt, for example a laptop, smartphone, tablet, etc.

 

Note: in the sample we have masked the suffix of the MAC addresses; in your environment you will see the complete information.

 

Another important log is event 4625 (Logon auditing), which is also generated on the NPS server, as in the following example:

 

Log Name:      Security
Source:        Microsoft-Windows-Security-Auditing
Date:          16/05/2017 14:33:14
Event ID:      4625
Task Category: Logon
Level:         Information
Keywords:      Audit Failure
User:          N/A
Computer:      NPS01W.contoso.local
Description:

An account failed to log on.

Subject:
    Security ID:    SYSTEM
    Account Name:   NPS01$
    Account Domain: CONTOSO
    Logon ID:       0x3e7

Logon Type: 3

Account For Which Logon Failed:
    Security ID:    NULL SID
    Account Name:   user1
    Account Domain: CONTOSO

Failure Information:
    Failure Reason: Your user account is disabled.
    Status:         0xc000006d ---> STATUS_LOGON_FAILURE
    Sub Status:     0xc0000072 ---> STATUS_ACCOUNT_DISABLED

Process Information:
    Caller Process ID:   0x3e4
    Caller Process Name: C:\Windows\System32\svchost.exe

Network Information:
    Workstation Name:
    Source Network Address: -
    Source Port:            -

Detailed Authentication Information:
    Logon Process:            CHAP
    Authentication Package:   MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
    Transited Services:       -
    Package Name (NTLM only): -
    Key Length:               0

 

This event is generated when a logon request fails. It is generated on the computer
where access was attempted.

 

The Subject fields indicate the account on the local system which requested the
logon. This is most commonly a service such as the Server service, or a local
process such as Winlogon.exe or Services.exe.

 

The Logon Type field indicates the kind of logon that was requested. The most
common types are 2 (interactive) and 3 (network).

 

The Process Information fields indicate which account and process on the system
requested the logon.

 

The Network Information fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.

 

The authentication information fields provide detailed information about this
specific logon request.

- Transited services indicate which intermediate services have participated in this logon request.

- Package name indicates which sub-protocol was used among the NTLM protocols.

- Key length indicates the length of the generated session key. This will be 0 if no session key was requested.

 

Again, note the highlighted fields. We have the user name, their domain, the failure reason, among other information. I want to comment on two specific fields here: Logon Type and Workstation Name. As we can see, the Logon Type returns the value 3, which corresponds to a Network Logon and is usually related to NTLM authentication. The table with all possible values can be found in the Audit logon events article. As for the Workstation Name field, you may be asking yourself: why was the device name not returned? The answer involves third-party operating systems. Event viewer and netlogon logs tend not to return the hostname of the device when we are dealing with third-party operating systems, since the device's hostname field is not available in the same way as it is on Windows.

 

Speaking of netlogon.log, it is important to say that this log is extremely useful for analyzing authentication problems (including account lockout problems); however, keep in mind that netlogon.log records information about network logons (Logon Type 3) and NTLM authentication. For other logon types and authentication protocols, Kerberos for example, you need to focus on the Event Viewer.

 

By default netlogon debugging is not enabled, but it can easily be configured with the command nltest /dbflag:2080FFFF or, for a more detailed level of information, nltest /dbflag:26FFFFFF. For those who want to enable debugging at scale while still avoiding scripts, there is a GPO for this: Specify log file debug output level, under Computer Configuration\Policies\Administrative Templates\System\Net Logon. The value in the GPO needs to be entered in its decimal form, i.e. for 2080FFFF you must use the value 545325055, while for 26FFFFFF you must use 654311423.
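The decimal values above can be derived directly in PowerShell instead of converting them by hand; the following two lines simply confirm the numbers quoted in the text:

# Convert the netlogon debug flag masks from hexadecimal to the decimal form expected by the GPO.
[Convert]::ToUInt32('2080FFFF', 16)   # 545325055 -> standard debug level
[Convert]::ToUInt32('26FFFFFF', 16)   # 654311423 -> more verbose debug level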

 

With debugging enabled, netlogon will generate more information, in particular about logon attempts, whether successful or not.

 

For example, for our test user, user1, the following information was generated on the NPS server:

 

05/16 14:33:14 [LOGON] SamLogon: Network logon of CONTOSO\user1 from  Returns 0xC0000072

05/16 14:33:18 [LOGON] SamLogon: Network logon of CONTOSO\user1 from  Returns 0xC0000072

05/16 14:33:20 [LOGON] SamLogon: Network logon of CONTOSO\user1 from  Returns 0xC0000072

05/16 14:33:22 [LOGON] SamLogon: Network logon of CONTOSO\user1 from  Returns 0xC0000072

05/16 14:33:29 [LOGON] SamLogon: Network logon of CONTOSO\user1 from  Returns 0xC0000072

 

Again, the code 0xC0000072 refers to the STATUS_ACCOUNT_DISABLED error. The list of the most common codes can be found in the article Quick Reference: Troubleshooting Netlogon Error Codes. Note that the name of the device that sent the credential is not returned, because it is a device running a third-party operating system. When we are dealing with a Windows device, the log contains information like this:

 

05/16 14:39:23 [LOGON] SamLogon: Network logon of CONTOSO\user1 from WindowsPhone01 Returns 0xC0000072

 

So far it looks easy, but what if the user is being locked out and we have no events generated on the NPS server?

 

 

 

The problem: the "phantom" authentications

 

I recently worked on an incident with a customer who had NPS auditing enabled; however, even though event 4673 was being generated, there were no events for the user who was affected. To preserve our customer's privacy, some information from the analysis has been redacted or modified.

 

Once again, we had a user being locked out after several authentication attempts with a wrong password (0xC000006A) against the NPS server; however, those authentication attempts were not visible in the NPS logs, only in the Logon events and in netlogon.log. So we chose to leave a network capture running for a few hours on the NPS server, until the user was locked out, and then examine more calmly where those credentials were coming from.

 

For this network capture we used netsh, via the command netsh trace start capture=yes tracefile=C:\temp\netsh.etl maxsize=300 scenario=netconnection. It is worth noting that this capture could have been taken with other tools, for example Network Monitor 3.4 or Wireshark; however, netsh is a built-in tool starting with Windows 7 and Windows Server 2008 R2 and does not require the installation of any additional package.
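In practice the capture is started, left running until the lockout reproduces, and then stopped. A minimal sketch of that sequence is shown below, reusing the same illustrative path and size:

# Start a circular network capture on the NPS server (path and size are illustrative).
netsh trace start capture=yes tracefile=C:\temp\netsh.etl maxsize=300 scenario=netconnection

# ...wait until the account lockout reproduces, then stop and save the capture.
netsh trace stop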

 

With the capture results in hand, we moved on to the analysis.

 

Initially I imagined that a simple filter on the RADIUS protocol would be enough, since it would allow me to analyze the entire accounting process performed by NPS:

 

However, there was no accounting at all being performed for our user's account, which we will call mydomain\ms-user. For context, this customer's Active Directory Account Lockout policy was configured to lock out users after 5 incorrect logon attempts within a 1-hour window, and in the NPS netlogon.log we found the following:

 

05/16 14:07:09 [LOGON] SamLogon: Network logon of MYDOMAIN\ms-user from  Returns 0xC000006A

05/16 14:19:18 [LOGON] SamLogon: Network logon of MYDOMAIN\ms-user from  Returns 0xC000006A

05/16 14:29:03 [LOGON] SamLogon: Network logon of MYDOMAIN\ms-user from  Returns 0xC000006A

05/16 14:29:17 [LOGON] SamLogon: Network logon of MYDOMAIN\ms-user from  Returns 0xC000006A

05/16 14:29:31 [LOGON] SamLogon: Network logon of MYDOMAIN\ms-user from  Returns 0xC000006A

 

So we had 5 authentication attempts with an incorrect password (0xC000006A) within a 1-hour window.

 

Back to the network capture: having confirmed that a filter on the protocol alone would not be enough, I chose to change the filter to look specifically for the user name, since every RADIUS accounting and authentication process performed by NPS must return a user name in the packet. The filter used was RADIUS.Attributes.AttributeUserName.UserName == "username":

 

 

I then found all the traffic involving our user mydomain\user2, including conversation timestamps matching the timestamps found in netlogon.log. It is worth noting that the device sending the credentials to NPS was one of the customer's Wireless Controllers; however, this Wireless Controller was only doing the job of forwarding the credential for validation. Expanding the frames, I found the real device that was sending the credentials:

 

 

All the authentication packets sent to NPS were being requested by a device whose MAC address has the prefix E8-50-8B, i.e. one of the prefixes used by Samsung. This device is probably a tablet or smartphone and hardly a laptop, because if it were a laptop running Windows, we would most likely have its hostname listed in the netlogon logs or in the Event Viewer.

 

 

 

Conclusion

From here on, the analysis should continue on the customer's side together with their networking team, examining information on the Wireless Controller or Access Point side in search of the source IP of the credential, or simply filtering on the identified MAC address.

 

 

 

Acknowledgments

I would like to thank Renato Pagan and Bruno Portela for their invaluable help throughout this incident. They not only provided information about some NPS concepts, but were also important listeners at moments when brainstorming was needed.

EnglishmansDentist Exploit Analysis


Introduction

We are continuing our series of blog posts dissecting the exploits released by ShadowBrokers in April 2017. After the first two posts about the SMB exploits known as EternalChampion and EternalSynergy, we’ll move this time to analyze a different tool and we’ll focus on the exploit named EnglishmansDentist designed to target Exchange Server 2003.

EnglishmansDentist targets Exchange 2003 mail server through a rendering vulnerability present in a shared library provided by the underlying (out-of-support) operating system Windows Server 2003, which is used by Exchange 2003 in its default configuration.

Newer operating systems (Windows Server 2008 and above) and more recent versions of Exchange Server (2007 and above) are not impacted by this exploit and so no action is needed for customers using these newer platforms.

As previously announced on MSRC blog, after considering the availability of ready-to-use weaponized code and the assessment of the threat landscape, Microsoft decided to release in June an extraordinary update for out-of-support platforms (Windows XP and Server 2003) to protect customers who were not able to update to newer products.

This blog post will deep-dive into the root cause of the vulnerability, the impact on Microsoft products, the exploitation methods and how modern mitigations can break such exploits in newer operating systems and products.

Overview

The root cause of this vulnerability is a memory corruption bug in the code of a shared library (OLECNV32.DLL), used to render images encoded with the old file format known as QuickDraw PICT. This graphic library is present by default on Windows XP and Windows Server 2003. Exchange Server 2003 uses this graphic library to render PICT content delivered in form of email attachments. So, while the underlying bug exists in the operating system, the attack vector used to reach the vulnerable code is an Exchange rendering routine called through OLE invocation and triggered with a specially crafted email attachment.

When Microsoft security engineers analyze a vulnerability in a shared component such as a graphic library, multiple investigation workstreams are initiated to answer two very important questions:

  • what products still in support may use or distribute the vulnerable shared library?
  • is the source code of the vulnerable library copied or re-used in other components?

For the first question, we determined that the vulnerable version of the OLECNV32.DLL library is shipped and present on disk only on out-of-support platforms like Windows Server 2003 and Windows XP, with the former being of interest as the default platform for an Exchange Server 2003 installation. After research on affected platforms and possible installation combinations of Exchange Server, we came up with the following matrix, which can help to understand which combinations of products are most exposed to the risk of exploitation by EnglishmansDentist.

Exchange Server 2007 product is not affected by this attack because the graphic rendering engine no longer uses OLECNV32.DLL library to render PICT images, even if the library may be present on disk (uncommon case with Windows Server 2003 + Exchange 2007). Newer versions of Exchange Server such as 2010 and 2013 are not impacted by this bug and so they won’t be considered.

Regarding the source code investigation, we tracked how the vulnerable PICT function was integrated and re-used in certain older versions of Office that are no longer in support. During this investigation, we were happy to see that while this bug was initially copied by developers into a graphic filter for Office, the same bug was later identified by Microsoft security reviews and fuzzing and was internally fixed back in 2006 as part of the increased vulnerability-finding effort initiated by Microsoft during that time. This example represents a good story of bug collisions, where a weaponized exploit used silently by an attacker may be killed by the internal pentesting and fuzzing efforts of the vendor.

It is probable that EnglishmansDentist was initially written before 2005 because the exploit seems to not work properly (ends with a crash) when tested against an Exchange Server 2003 SP2 and it has only 32-bit operating system targets, probably because 64-bit architecture was not popular enough a decade ago.

Exploit requirements and delivery mechanism

EnglishmansDentist requires the attacker to have at least one valid mail account on the target Exchange 2003 mail server (username and password). In fact, the exploit will first run a series of validations and checks to make sure that the valid account can log in and check mail successfully. The exploit also requires a secondary email account (spoofed or real), used as the source, which will send the malformed PICT attachment to the valid account.

After the delivery of the malicious PICT attachment to the target mail server, the tool will log in with the valid account credentials and force Exchange Server to parse and render the malicious attachment using one of the many protocols available (OWA, IMAP, POP3). Because the rendering code is executed on the server side, successful exploitation will result in execution of arbitrary code in the context of an Exchange Server process running with SYSTEM privileges.

After exploitation, EnglishmansDentist remains in listening mode waiting for the shellcode to connect back. When this happens, the tool instructs the Exchange server to delete the malicious email which delivered the exploit, removing forensic evidence of the attack.

The vulnerability: CVE-2017-8487

In order to understand the vulnerability, readers must be familiar with PICT graphic specifications and with the opcodes defined by this file format. Some references to parse this old file format are still available online here, here and here. Another good reference with details of the internal PICT opcode parsing code is also available here.

When testing the exploit against Exchange Server 2003 SP2, we observed the following crash in our test environment. We include in this blog only the information and modules relevant to the analysis of this vulnerability, marking attacker-controlled frames in red and interesting function names in yellow.

Application exception occurred:
App: C:\Program Files\Exchsrvr\bin\store.exe (pid=2288)
When: 4/15/2017 @ 00:27:11.078
Exception number: c0000005 (access violation)

*----> System Information <----*
Computer Name: XXX
User Name: SYSTEM
Terminal Session Id: 0
Number of Processors: 1
Processor Type: x86 Family 6 Model 62 Stepping 4
Windows Version: 5.2
Current Build: 3790
Service Pack: 2

*----> Module List <----*
0000000000400000 - 000000000091c000: C:\Program Files\Exchsrvr\bin\store.exe
[...]
000000006d580000 - 000000006d628000: C:\WINDOWS\system32\dbghelp.dll
0000000071db0000 - 0000000071dbc000: C:\WINDOWS\system32\OLECNV32.DLL

eax=4a85c948 ebx=00094850 ecx=00000000 edx=00000020 esi=000949c0 edi=4a85ca0c
eip=6d8b1cfd esp=4a85c738 ebp=4a85ca50 iopl=0 nv up ei pl nz na po nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010202
6d8b1cfd ?? ???

-> ROP gadget targeting DBGHELP.DLL + 0x00081cfd

0:057> kp
# ChildEBP RetAddr
WARNING: Frame IP not in any known module. Following frames may be wrong.
00 4a85c734 77c0f329 0x6d8b1cfd
01 4a85ca50 77c0f282 gdi32!iAnsiCallback+0x9d
02 4a85ca84 77c0f480 gdi32!EnumFontsInternalW+0x111
03 4a85cab4 77c265dc gdi32!EnumFontsInternalA+0x68
04 4a85cad4 71db2a0b gdi32!EnumFontsA+0x1a
05 4a85caf8 71db31c7 olecnv32!EnumFontFunc+0x3d1
06 4a85cb08 71db38d9 olecnv32!GdiOpenMetafile+0x335
07 4a85cb14 71db6290 olecnv32!GdiTextOut+0x22
08 4a85cc98 71db71c4 olecnv32!QDCopyBytes+0xd40
09 4a85ccb0 71db1375 olecnv32!QDConvertPicture+0xb4
0a 4a85ccc0 77760418 olecnv32!QD2GDI+0x3d
0b 4a85cd28 776fd79a ole32!QD2GDI+0xac
0c 4a85cd58 776cccf3 ole32!UtGetHMFFromMFStm+0x71
0d 4a85cd80 776ccb21 ole32!CMfObject::Load+0x4d
0e 4a85cdb4 776cc96a ole32!CCacheNode::Load+0xc5
0f 4a85ce60 007628f9 ole32!COleCache::Load+0x1b2
10 4a85ce80 00762e9b store!IMAGESTM::HrGetViewObject+0x87
11 4a85ce8c 0076304a store!IMAGESTM::HrAttachToImage+0x16
12 4a85ce98 0076320c store!IMAGESTM::HrEnsureImage+0x10
13 4a85cea8 6225dce2 store!IMAGESTM::Stat+0xf
14 4a85ced4 62258840 exmime!CStreamLockBytes::Stat+0x4a
15 4a85d0ac 6225d911 exmime!CBodyStream::HrInitialize+0x107
16 4a85d0f0 6225e4ca exmime!CMessageBody::GetDataHere+0xd4
[...]
 

As immediately visible from this callstack trace, the vulnerability exists inside the routine QD2GDI() exported by OLECNV32.DLL. This function is responsible for converting and rendering QuickDraw images and it's used by the Exchange Server 2003 "store.exe" process. The routine is called when, for example, an attachment is automatically parsed through OWA while reading new incoming emails; the attack surface of this parser is reachable through OLE32.

The internal code of QD2GDI() has a memory corruption bug while parsing a LongComment record, normally identified by opcode 0xA1. An attacker can exploit this bug by creating a malformed PICT file with a LongComment record containing a PP_FONTNAME sub-record whose fontName string is greater than 32 bytes, which triggers memory corruption with an out-of-bound overwrite of a fixed-size variable.

A malformed PICT image produced by EnglishmansDentist has a layout similar to the following.

The image will always start with two hardcoded headers. One is used to integrate the PICT image into the TNEF OLE container (mail attachment format used by Exchange), and the second one represents a proper PICT header. The two static headers are immediately followed by a dummy tag TxFont record and by the vulnerable LongComment record which will trigger the memory corruption overwrite.

The preparation of the malicious PICT file is done programmatically in EnglishmansDentist in two routines at offset 0x404621 and 0x404650. It’s done by assembling the static headers followed by multiple PICT records, including the malformed 0xA1 opcode and other records used to deliver a ROP chain and the encrypted shellcode payload.

The decoding of the header and the records performed by QD2GDI() will immediately hit the malformed 0xA1 opcode and trigger the vulnerability.

//inner parsing done by OLECNV32!QD2GDI()
private void TranslateOpcode( opcodeEntry far * currOpcodeLPtr )
{
    Word  function = currOpcodeLPtr->function;   
    /* perform appropriate action based on function code */
    switch (function)                                                                                                    
        {
            [...]
            case LongComment:         // opcode 0xA1      
            {  
                [...]          
                /* determine what should be done with the comment */      
                switch (comment)         
                { 
                    [...]              
                    case picAppComment:         //comment 0064             
                    {
                        [...]                 

                        /* determine what to do with the function specified */
                        switch (realFunc)               
                        {                  
                            case PP_FONTNAME:             //function 0011                 
                            {                     
                                Byte     fontFamily;                     
                                Byte     charSet;                     
                                Byte     fontName[32];     //fixed-size string buffer (32 bytes)                       

                                /* font name from GDI2QD - read the LOGFONT info */                     
                                GetByte( &fontFamily );                     
                                GetByte( &charSet ); 

                                //”fontName” parameter is attacker’s controlled (malformed size)
                                //GetString()does not validate max size (32) and will blindly use it to copy
                                //resulting in an out-of-bound overwrite                      

                                GetString( fontName );                          

                                length = 0;                       

                                // call Gdi module to override font selection                    
                                GdiFontName( fontFamily, charSet, fontName );

As mentioned earlier, this bug was found internally by Microsoft when this code was ported and integrated in some older versions of Office. Therefore the function GetString() was modified many years ago to require the caller to pass the length of the buffer and enforce checks to avoid overwriting data out-of-bound, neutralizing this vulnerability in every possible place. 

Exploitation: easy job without mitigations

Unfortunately, it’s trivial to exploit a good out-of-bound overwrite bug inside an environment like Windows Server 2003 which lacks fundamental mitigations like ASLR and CFG. On Windows Server 2003, DEP is easily bypassed by the attacker because of lack of ASLR. Without the randomization of ASLR in memory, the attacker can use a pre-calculated ROP chain to call VirtualAlloc and next transfer the shellcode into the newly allocated executable buffer to run without problems.

The exploit first triggers the memory corruption vulnerability using a malformed 0xA1 record and it uses the out-of-bound overwrite to corrupt internal OLECNV32 structures hosting other objects. Specifically, the exploit targets a font entry in the global fontTable[] array which later is copied to gdiEnv structure and can be used to overwrite a function pointer and take control of execution.

The following memory dump captured during exploitation shows an example of the fontTable[] array where some entries were corrupted by the memory overwrite caused by the vulnerable GetString() function. It is possible to spot in the fontTable[] the malformed data coming from the PICT file and the offset used as initial ROP gadget (0x6D8B1CFD) marked in red.

After the corruption of fontTable[], the exploit leverages the parsing of other PICT opcodes to trigger further interactions with the recently malformed font entry. This will lead OLECNV32 to do one more string copy operation to copy the malformed font into the OLECNV32!gdiEnv data structure, as shown in the following code snippet (fontTable[newFont] was malformed and is now attacker-controlled).

   if (GdiAttribHasChanged( GdiTxFont ))
    {      
        Integer  newFont;        

        /* call the routine to find a matching GDI font face name */      
        newFont = FindGdiFont();        

        /* fill in information from the font lookup table */      
        gdiEnv.newLogFont.lfPitchAndFamily = fontTable[newFont].family | (Byte)DEFAULT_PITCH;        

        /* copy the correct font character set */      
        gdiEnv.newLogFont.lfCharSet = fontTable[newFont].charset;
       
        /* copy over the new font face name */      
        lstrcpy( gdiEnv.newLogFont.lfFaceName, fontTable[newFont].gdiName );  

    [...]

This final string copy operation will result in overwriting a function pointer that can be leveraged by the attacker as callback to take control later when the EnumFonts function is called to enumerate fonts.

0:000> dt olecnv32!gdiEnv    
    +0x000 metafile         : 0xffffffff`b8662029 Void   
    +0x004 newLogBrush      : tagLOGBRUSH   
    +0x010 newLogFont       : tagLOGFONTA   
    +0x04c newLogPen        : tagLOGPEN   
    +0x05c clipRect         : tagRECT   
    +0x06c drawingEnabled   : 0n1   
    +0x070 sameObject       : 0n0   
    +0x074 useGdiFont       : 0n0   
    +0x078 hatchIndex       : 0n-1   
    +0x07c lastPattern      : [8]  ""   
    +0x084 lastPatType      : 0n0   
    +0x088 lastFgColor      : 0   
    +0x08c lastBkColor      : 0   
    +0x090 infoContext      : 0x00000000`7c0133d1 HDC__   
    +0x094 fontFunction     : 0x00000000`71db263a     <---overwritten with 0x6D8B1CFD   
    +0x098 state            : [24]  ""

Exploitation: ROP chains for English, German, Korean and Chinese OS

The exploit uses ROP gadgets built on top of DBGHELP.DLL library, which is normally loaded into the memory space of Exchange Server store.exe process. It is possible that the first version of the exploit was instead developed using OLECNV32.DLL gadgets. Even with the lack of ASLR randomization, obtaining reliable and universal exploitation of this vulnerability is not immediate, because DBGHELP.DLL is a language-dependent library (multiple versions exist for different OS languages). This introduces certain variance across different versions of Windows Server 2003. 

The attacker solved this problem by pre-calculating the correct offsets for each OS version they were interested in targeting. In fact, the configuration XML file included in EnglishmansDentist contains ROP gadgets developed for Windows Server 2003 in English but also for German, Korean, Simplified Chinese, and Traditional Chinese, disclosing all the potential targeted platforms of attacker’s interest.

The decoding of the ROP gadgets configured in EnglishmansDentist will map to these code blocks of DBGHELP.DLL module.

The first gadget executed by the function pointer overwrite (0x6d8b1cfd) will do some stack alignment and re-balancing EBP (add 0x1A0) and then transfer control to the full ROP chain using a combination of LEAVE/RET instructions. The full ROP chain (as seen in memory) is shown below with the equivalent gadgets on the right. It’s a short ROP chain which allocates executable memory to bypass DEP (0x8888 bytes) and copies into this region additional shellcode (egghunter) which is responsible for running the final backdoor payload with SYSTEM privilege (as granted by the Exchange Server process under attack). 

0:057> r
eax=4a85c948 ebx=00094850 ecx=00000000 edx=00000020 esi=000949c0 edi=4a85ca0c
eip=6d8b1cfd esp=4a85c738 ebp=4a85ca50 iopl=0         nv up ei pl nz na po nc
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00010202
6d8b1cfd ??              ??? 

0:057> dps ebp+1a0+4
4a85cbf4  6d849568 ;pop ecx/retn
4a85cbf8  6d831104  ;ptr kernel32!VirtualAlloc
4a85cbfc  6d88c464  ;mov eax,dword[ecx]/retn
4a85cc00  6d85f71d  ;jmp eax
4a85cc04  6d849568  ;pop ecx/retn
4a85cc08  00000000  ;VirtualAlloc_arg1(lpAddress)
4a85cc0c  00008888  ;VirtualAlloc_arg2(dwSize)
4a85cc10  00001000  ;VirtualAlloc_arg3(MEM_COMMIT)
4a85cc14  00000040  ;VirtualAlloc_arg4(PAGE_RWX)
4a85cc18  a4f3f88b  ;code "mov edi,eax/rep movs byte [edi],[esi]"
4a85cc1c  6d893f8b  ;mov dword ptr [eax],ecx/ret8
4a85cc20  6d849568  ;pop ecx/retn
4a85cc24  72b8130f  
4a85cc28  534c1b8a
4a85cc2c  00000031  ;size of movs byte (egghunter)
4a85cc30  6d843b71  ;pop esi/retn
4a85cc34  71d096fc
4a85cc38  6d85f71d  ;jmp eax

Detection and Mitigations

As we mentioned earlier, Windows Server 2003 lacks fundamental mitigations developed over the last decade of security enhancements to Microsoft products. Because of ASLR, CFG and other mitigations, a similar bug in a modern operating system like Windows 10 Creators Update or Windows Server 2016 would be much more difficult to exploit remotely. Also, the introduction of integrity levels and containers (sandboxes) allowed Microsoft to constrain certain graphic rendering components to minimize the damage in case of a parsing vulnerability like the one discussed today (e.g. Office Protected View, AppContainer for browsers, the font sandbox for font rendering).

Finally, these days the evolution of security checks in Microsoft compilers and the extensive use of fuzzing can find and eliminate similar bugs in the source code before they ship in the products, reducing the entire class of bugs at the source.

We are providing a YARA signature which can be used to detect the PICT images delivered in emails by EnglishmansDentist for customers still running Windows Server 2003. 

rule PICT_ENGLISHMANEDENTIST
{   
    strings:    
        $hdr  = {0B B0 00 00 00 00 00 64 00 64 11 01}    
        $tag  = {03 15 00 A1 00 64 01 00 50 50 4E 54 00 11 00 4D}    
        $rop1 = {FF FF 01 01 FD 1C}    
        $rop2 = {68 95 ?? ?? 04 11 ?? ?? 64 C4 ?? ?? 1D F7 ?? ?? 68 95}    

    condition:
        $hdr and #tag > 5 and $rop1 and $rop2 and filesize < 100KB
}

As always, we recommend that customers use the latest and newest OS and Microsoft products, so as to benefit from the security enhancements and mitigations against exploits added at each iteration.

Final Words

I’d like to extend a thank you to Matt Miller (MSRC), Ben Faull (Office Security) and Brent Alinger (Office 365) for their notes and help provided in analyzing this exploit.

Elia Florio,
Windows Defender ATP Research Team


Inspire 2017 Office 365 Session Recordings


Following up from the last post on Inspire’s Windows Server And Hybrid Cloud session recordings, this post includes the links to the recorded Office 365 sessions. If you are particularly interested in the Microsoft 365 Business session, check out this post as well. With Office 365 migrations becoming more prevalent with partners of all sizes, it's important to take a look at some of the workloads outside of Exchange Online to see what other capabilities your customers can take advantage of.

OFC04 Get ready to profit from the growing demand for Office 365 security and compliance capabilities

 Today, many Office 365 customers aren’t using the service’s security and compliance features. Office 365 Advanced Security and Compliance solutions can help you build a more profitable practice. Learn from the experts, and walk away with practical advice you can act on immediately.

Watch Video

OFC06 Microsoft Workplace Analytics: Deepen engagement, improve productivity, win deals

 Office 365 Workplace Analytics transforms digital exhaust into actionable insights that enable managers to maximize their organizations’ time and resources. Discover how Workplace Analytics enhances businesses and helps partners win deals by enhancing their existing solution sets.

Watch Video

OFC07 Skype for Business in Office 365: Tools and resources to grow your managed services practice

As customers continue to adopt Skype for Business in Office 365, the demand for managed services increases. Learn how the Skype Operations Framework provides tools and resources to help you take advantage of these new opportunities to build a sustainable managed services practice.

Watch Video

OFC08 Unleash the right Office 365 collaboration services for every customer

As the universal toolkit for collaboration, Office 365 supports the unique needs of customer teams. Learn how to differentiate Office 365 collaboration offerings to chart the best solutions for your customers. And find out how Microsoft is evolving the collaboration opportunity.

OFC09 Microsoft Teams: the latest addition to the Microsoft collaboration story opens new opportunities to expand your practice

Flat organizations. Distributed teams. Mixed generations and workstyles. The modern workplace demands new tools and new ways of working. Learn how Microsoft Teams—the chat-based workspace in Office 365—can help business culture evolve as it creates a profitable opportunity for your practice.

Watch Video

OFC10 SharePoint innovations create new partner opportunities in the connected workplace

New SharePoint innovations can empower people and organizations to transform business processes, engage employees, and harness collective knowledge. Find out how customers are creating connected workplaces. And discover how these innovations create even more opportunities for your practice.

Watch Video

OFC12 Build modern business solutions on the Office 365 Platform

The latest Office platform additions—including Teams and new Microsoft Graph capabilities—are enabling developers to create solutions that adapt Office 365 to the needs of 100 million commercial users. Understand how these enhancements can enrich your packaged solutions and development practices.

Watch Video

OFC14 New Microsoft investments in the Cloud Solution Provider (CSP) platform can accelerate your Office 365 practice

The value proposition for Office 365 on the Cloud Solution Provider platform has never been greater. Learn how you can take advantage of Microsoft’s latest investments across products and programs to accelerate your cloud business. And to expand the breadth of services you can offer your customers.

Watch Video

OFC15 Skype for Business in Office 365: Target voice and meeting opportunities that fuel high-value customer communications and collaboration

Skype for Business delivers world-class experiences across voice, video, and meetings to create new customer value and opportunities to grow your practice. Find out how to target communications and collaboration scenarios. And how other partners are developing platform offerings and services.

Watch Video

MSP02p Build with Office 365: Modern meetings

Leverage Modern Meetings to double your ARPU from new as well as existing customers. This session explores what SMBs are currently paying for their web conferencing needs, how they evaluate a web conferencing offer, Microsoft's value proposition, and go-to-market.

Watch Video

MSP06 Maximize your Office 365 profitability by investing in value added services and vertical offers

Understand how investing in value added services and vertical offers can increase your Office 365 CSP profitability and create customer stickiness - enabling you to drive sales and build customer trust, cross-sell cloud workloads, and maximize your margins.

Watch Video

MSP09 Grow your SMB business with the power of 3

Harness the power of 3 with Office 365: Sell, Upsell and Cross Sell. Come learn about the opportunities to develop high value solutions for your SMB customers.

Watch Video

CE145 Enable collaboration through intelligent and secure enterprise video management with Microsoft Stream

Microsoft’s new enterprise video service is here! Learn about the latest in collaborative video for Office 365 and beyond. Get a sneak peek into the roadmap and partner opportunities that enable customers to securely power intelligence in video workloads within their organization.

Watch Video

 

 

Continuous Deployment to Azure Automation DSC Part 4


Picking this series up after a break. Last time I mentioned I wanted to be able to add multiple configurations and ensure they compiled as well. What I did to my source directory was simply clone the sample configuration I've made. As Azure Automation DSC treats them as different configurations, I just kept all the settings for the configuration exactly the same. I copied the configuration and committed it to the dev branch to begin the build (a minimal sketch of such a configuration is shown below).
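For readers who have not seen the earlier parts of this series, the cloned configuration is just an ordinary PowerShell DSC configuration saved under a new name. The following is a minimal, hypothetical sketch of what a file such as SecondConfigurationDEV.ps1 might contain; the resource and its settings are illustrative, not the actual configuration used in this series:

# Hypothetical, minimal DSC configuration used only to illustrate the shape of the cloned file.
Configuration SecondConfigurationDEV
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        # Example resource: make sure a folder exists on the target node.
        File ExampleFolder
        {
            DestinationPath = 'C:\ExampleApp'
            Type            = 'Directory'
            Ensure          = 'Present'
        }
    }
}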

I'll use a pull request to pull the contents of my configtesting branch into the dev branch. In VSTS, I go to the Code section, ensure the correct branch is selected, and then click Create a Pull Request.

Make sure the correct details are filled in and then click Create. (I could add reviewers if I wanted someone to check the code, or link a work item to the request.)

I can then go and complete the pull request.

The build gets automatically triggered (by a change to the dev branch) and completes successfully without me having to change anything. If I check the artifact which is created I now have my SecondConfigurationDEV.ps1 file in there ready for publishing to the Azure Automation service.

I can go and manually trigger the release for the DEV configurations.

And when it is complete I can see the new DEV configuration in Azure Automation DSC.

Not too hard? Well, it is about to get harder. I still need to work out how to actually test the configurations before releasing them: they compile, which is fine, but will they actually work in my environment? I also still need to handle credentials. I could do that using Azure Automation DSC, but I'm concerned about the time it takes to actually register the node and test the configuration. In the next post I'll try to use an on-premises build agent to actually run the configurations and test that they can apply.
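
As a rough illustration of the kind of pre-release check I have in mind, here is a minimal PowerShell sketch (not something from this series) that simply verifies a configuration script compiles to a MOF locally. The file and configuration name SecondConfigurationDEV are assumptions based on the artifact mentioned above, and any parameters or credentials the configuration requires would need to be passed in as well:

# Minimal local compile check for a DSC configuration script (sketch only).
$configFile = ".\SecondConfigurationDEV.ps1"   # assumed path inside the build artifact
$configName = "SecondConfigurationDEV"         # assumed configuration name
$outputPath = Join-Path $env:TEMP "DscCompileTest"

. $configFile                                  # dot-source so the configuration is loaded
& $configName -OutputPath $outputPath          # compile the configuration to MOF locally

if (-not (Get-ChildItem -Path $outputPath -Filter *.mof -ErrorAction SilentlyContinue)) {
    throw "Compilation of $configName did not produce any MOF files."
}
Write-Output "OK: $configName compiled locally."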

Microsoft Partner Network Annual Membership Fee Revision [Updated 7/21]


The Microsoft Partner Network reviews its annual membership fees every year.

We would like to inform you that, effective October 1, 2017, prices will be revised as follows.

 

Please also note that the special pricing previously offered for cloud competencies will be discontinued, and a single, uniform price structure will apply to all competencies.

Competency renewal procedures, including payment of the annual fee, can be completed starting 90 days before your contract expiration date. Even if your contract expires in October or later, the previous prices will apply as long as we can confirm receipt of your payment by September 30, so we encourage you to complete your renewal early.

If you have any questions, please contact the Partner Call Center.

 

Discontinuation of Support for Session Border Controllers in Exchange Online Unified Messaging


(This article is a translation of Discontinuation of support for Session Border Controllers in Exchange Online Unified Messaging, posted to the Exchange Team Blog on July 18, 2017. For the latest information, please refer to the original article.)

In July 2018, we will discontinue support for Session Border Controllers (SBCs) used to connect third-party PBX systems to Exchange Online Unified Messaging (UM). We are making this change in order to deliver higher-quality voice mail over standard Exchange and Skype for Business protocols. Please note that customers considering a new deployment of this scenario will need to complete one of the migrations described below in less than a year. Deployments that are already in place remain fully supported until July 2018, including migrations from voice mail-enabled mailboxes in on-premises Exchange and the creation of new voice mail-enabled mailboxes.

The following configurations are not affected by this change:

  • Skype for Business Server (on-premises) connected to Exchange Online UM
  • Third-party voice mail solutions that use APIs rather than an SBC connection to deposit voice mail messages into Exchange Online mailboxes
  • Exchange Server UM (on-premises) in any form

Affected customers must migrate to one of the following alternative solutions by July 2018:

  • Option 1: Migrate completely from the third-party on-premises PBX to Office 365 Cloud PBX.
  • Option 2: Migrate completely from the third-party on-premises PBX to on-premises Skype for Business Server Enterprise Voice.
  • Option 3: For customers running a mixed environment of a third-party PBX and Skype for Business, connect the PBX to Skype for Business Server using a connector from a Microsoft partner and use Exchange Online UM through that connector. The anynode UM connector from TE-SYSTEMS, for example, can be used for this purpose.
  • Option 4: For customers who have not deployed Skype for Business Server, or for whom none of the above options is appropriate, implement a third-party voice mail system.

Although only a small number of customers are affected by this change, migrating a voice mail platform takes time, from evaluating which approach to choose through to implementing it. We therefore recommend starting your migration preparations early. For more details, refer to the pages below.

If you have questions about this change, please ask them in the Office 365 Tech Community (in English).

The Exchange Team

* The information in this article (including attachments and linked content) is current as of the date it was written and is subject to change without notice.

Custom Column Search Results Not Returned Immediately After Configuring the Search Service in SharePoint Foundation 2010


Hello, this is 清 from SharePoint Support.

 

In this post, I'd like to explain an issue in which search results for custom columns are not returned in a SharePoint Foundation 2010 environment.

In SharePoint Foundation 2010, the SharePoint Foundation Search V4 (SPSearch4) service provides the indexing and query roles for the search feature. The issue described here can occur when using this service.

Note that the search feature available in SharePoint Server 2010 environments by creating a Search Service Application is provided by the SharePoint Server Search 14 (OSearch14) service and is not affected by this issue.

 

Symptoms

When a user searches for data in a custom column immediately after the search service has been configured in SharePoint Foundation 2010, the expected results are not returned.

 

Cause

Immediately after the search service is configured, the search schema has not yet been fully loaded.

When the search schema cannot be referenced, crawled properties are not correctly mapped to managed properties.

As a result, the content of custom columns is not written to the search index, and users do not get the expected results on the search results page.

 

Note:

The search schema contains the mappings from crawled properties to managed properties as well as the settings for managed properties, and it is used to control how crawled properties and managed properties are mapped.

Because only managed properties are kept in the search index, crawled properties must be mapped to managed properties so that content and metadata can be searched after the content is crawled.

 

Resolution

To resolve this issue, have a system administrator restart the service and run a full crawl after the search service has been configured. The steps are below (a scripted equivalent follows the list).

 

  1. Log on to the SharePoint Foundation 2010 server as an administrator and open the OS Services console.
  2. Select [SharePoint Foundation Search V4 (SPSearch4)] and restart it.
  3. Open a command prompt as an administrator and change to the following directory:
    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN
  4. To start a full crawl, run the following command:
    Stsadm -o spsearch -action fullcrawlstart
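
For reference only (this is not part of the original article), the same restart and full crawl can be scripted in PowerShell; the path assumes the default 14-hive location shown in step 3:

# Restart the SharePoint Foundation Search V4 service.
Restart-Service -Name "SPSearch4"

# Start a full crawl with stsadm from the 14 hive.
$stsadm = Join-Path $env:CommonProgramFiles "Microsoft Shared\Web Server Extensions\14\BIN\stsadm.exe"
& $stsadm -o spsearch -action fullcrawlstart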

 

Details

You can reproduce the issue with the following steps.

 

  1. Install SharePoint Foundation 2010.
  2. Start the search service from the Central Administration site.
  3. Create a site collection and add a custom column to a library.
  4. Enter some information in the library and save the item. This information will be used as the keyword to search for the item.
  5. Wait for the first crawl to finish, then search for the string in the custom column and check the search results page.

 

Status

Microsoft has confirmed this as a problem in the Microsoft product listed below.

 

Applies to

This article applies to the following product:

Microsoft SharePoint Foundation 2010

 

 

If you encounter the same issue, please apply the resolution described above.

That's all for this post.

 

* The information in this article is current as of the date it was written and is subject to change without notice.

The Explanimators: Unlimited Possibilities with the Internet of Things


What do a computer and a toothbrush have in common? As objects sitting next to each other, admittedly not much. But the Internet of Things (IoT) connects the two closely, making them part of a network of many millions of devices.

But what is IoT, and more to the point, what can it do? The term refers to connecting arbitrary objects both with one another and outward with the Internet. To do this, the objects are equipped with processors and embedded sensors so that they can communicate with each other over IP networks. This enables them to supply their owners with all kinds of information and to carry out tasks on their own.

For us, IoT, alongside artificial intelligence (AI), is one of the key technologies of digital transformation. We offer a broad portfolio for connecting things in the Internet of Things and for generating, analyzing, and visualizing data. In doing so, we are actively driving the convergence of the real and digital worlds.

More episodes of The Explanimators:

The buzzword "Artificial Intelligence (AI)": what does it actually mean?


A post by Sydney Loerch
PR/Communications Intern

A Deep Dive into Dynamic Group Calculation and How it Affects SCOM Performance


 

I would first like to give a special thanks to John Mckeown and Nick Masiuk, both of whom provided major contributions to the work described below.

For those that know SCOM, it isn’t necessarily the fastest application out there, particularly when it comes to console performance.  Troubleshooting that can be painful, though it can typically be traced back to needing some additional resources somewhere in the environment.  Needless to say, group calculation is not the first thing one thinks of.  That said, in this case we were working on some rather perplexing performance issues that simply could not be explained away by adding more RAM or using faster disks, as these resources were already well above Microsoft’s recommendations.

Over the course of troubleshooting, we used the following script. What it effectively does is query a few SQL dynamic management views for the most expensive transactions, so you can see what types of transactions are taking up the majority of your time.  Before you take this and start using it, there are a few things worth noting:

  1. This script is somewhat expensive in terms of resources. We did this testing mainly against development environments. It will work in a production environment, but if you can run your tests in a non-production environment, that would be better.
  2. There are a few configurable parameters. The DB name is straightforward, but the script also only grabs the last 10 minutes' worth of transactions and only looks for transactions that have run 2 or more times. These values can be changed by editing the script accordingly.
  3. I'm not quite sure how SQL populates this data, but there does seem to be a lag between when a transaction completes and when it starts showing up in the dynamic management views we are querying.

Declare @oltp_db_name as varchar(128);

-- ======================================
-- PARAMETERS --
set @oltp_db_name = 'OperationsManager';
-- ======================================

select * from (
    select db.value AS dbid
        , COALESCE(DB_NAME(st.dbid), DB_NAME(CAST(db.value as int)) + '*') AS DBNAME
        , qs.execution_count
        , qs.total_worker_time as cpu_time
        , qs.last_worker_time as last_execution_cpu_time
        , qs.total_worker_time/qs.execution_count as avg_cpu_time
        , substring(st.text, (qs.statement_start_offset/2) + 1,
            ((case qs.statement_end_offset
                when -1 then datalength(st.text)
                else qs.statement_end_offset
              end - qs.statement_start_offset)/2) + 1) as statement_text
        , qs.last_execution_time
        , qs.creation_time as plan_created_on
        , qp.query_plan
        , qs.total_elapsed_time/1000.0/1000.0 as total_elapsed_time_sec
        , (qs.total_elapsed_time/qs.execution_count)/1000.0/1000.0 as avg_elapsed_time_sec
        , qs.last_elapsed_time/1000.0/1000.0 as last_elapsed_time_sec
        , qs.min_elapsed_time/1000.0/1000.0 as min_elapsed_time_sec
        , qs.max_elapsed_time/1000.0/1000.0 as max_elapsed_time_sec
        , qs.total_clr_time, qs.last_clr_time
        , qs.min_clr_time, qs.max_clr_time
        , qs.plan_handle
        , qs.sql_handle
    from sys.dm_exec_query_stats as qs
    cross apply sys.dm_exec_query_plan (qs.plan_handle) as qp
    cross apply sys.dm_exec_sql_text(qs.sql_handle) as st
    outer apply sys.dm_exec_plan_attributes (qs.plan_handle) as db
    where qs.last_execution_time between dateadd(mi, -10, getdate()) and getdate()
        and qs.execution_count >= 2
        and db.attribute = 'dbid'
        and db.value = (select db_id(@oltp_db_name))
) a
--where statement_text like '%from Cnsmr_accnt_ar_log this_ left outer join Cnsmr_accnt ca5_ on this_.cnsmr_accnt_id=ca5_.cnsmr_accnt_id inner join Usr%'
--where statement_text like '%INSERT TEXT SEARCH HERE IF YOU WANT TO CHECK SPECIFIC CALCUlATIONS%'
order by total_elapsed_time_sec desc
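
As a side note that is not part of the original post: if you prefer to run the diagnostic from PowerShell rather than SQL Server Management Studio, something like the following works, assuming the SqlServer (or older SQLPS) module is installed and the query above has been saved to a file (the server name and path are placeholders):

# Run the diagnostic query and browse the results in a grid.
Invoke-Sqlcmd -ServerInstance "SCOMSQL01\INSTANCE1" `
              -Database "OperationsManager" `
              -InputFile "C:\Temp\top-transactions.sql" |
    Out-GridView -Title "Most expensive transactions (last 10 minutes)"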

 

Below is an example of how the output looks in my lab:

[Screenshot: sample query output from the lab environment]

This is, for the most part, healthy. You can see from the total_elapsed_time_sec and avg_elapsed_time_sec columns that very few transactions have excessive run times. The top transaction, in my case, is the only thing consistently taking a while to run. You can further note that its average and last-run values are very close to 300 seconds, which is consistent with an operation that is timing out. The statement_text column can be used to see what SQL is running.  While you won’t see the values of the variables being passed, you will see the SQL statements being run, which allowed us to isolate what was going on. The highlighted transaction in my screenshot is a CLR call that is being generated by a view.

In our case, we saw a number of transactions hitting that magic 300-second value, most of which were CLR calls, and most of which looked like the one I have highlighted on line 28. I’m not a SQL expert, to be clear, but CLR times (as explained to me) essentially represent transactions coming into SQL from an external source (the SCOM console in the highlighted example, which uses SQL CLR types as a dependency).  Looking at the statement text, we could see that these transactions were group calculation requests, or views that were being filtered by the same groups. While we couldn’t see the specific data being requested, we could see enough of the logic behind it to tell that both the groups and the views scoped by those groups were the offending queries, which eventually allowed us to start doing some isolation work. Here’s what we learned:

  1. Certain types of dynamic logic behave differently in terms of their performance.  This is not surprising when you think about it, but a close look showed us that the difference was orders of magnitude. Using an ‘equals’ statement in a group calculation generates a straight SQL query, but using more advanced operations such as contains, matches wildcard, or matches regular expression generates a CLR call that uses the same DBO function (matches_regular_expression). In terms of performance, the difference was very noticeable.  We saw these transaction times increasing exponentially (in our case by powers of 10).  Unfortunately, because these advanced operations all use the same function on the back end, none of them performed noticeably better than the others.
  2. This was further complicated by the way the matches_regular_expression function worked. Calls to the DB essentially invoked this function for each condition in the group calculation (i.e., if you had 10 OR conditions, it would loop through the entire data set using the matches_regular_expression function 10 times).
  3. This is further exacerbated by large groups. To me, this is more or less common sense, but it is one thing to understand it intellectually and another to see the differences in real-world implementations.  This was a large environment, and the groups in question were pulling a large number of contained objects from a large number of servers. As the group size increased, these queries began to consistently time out (and, by proxy, it appeared that they were being re-run). Multiple filters/OR statements produced exponentially worse results, as it appears that the SQL calls loop back through the data for each piece of criteria.  Naturally, this makes large groups with multiple inclusion/exclusion criteria very expensive.
  4. Likewise, because it is a CLR call, the call is essentially initiated against the SQL server from the management server. This means that network bandwidth issues can have a direct effect on group calculation (as well as the corresponding config generation with overrides targeted at a group).
  5. Not to be outdone, views that were scoped by these groups essentially recalculated the group membership as part of the view. We could see this by looking at the statement text for the views, which included the same group query as part of the view’s query.  Effectively, this means the same query is being run both for the group calculation and for the view associated with that group. If you’ve ever had a large view that does not populate all of the objects in it, I suspect (though I haven’t proved it yet) that the ultimate cause is these CLR queries timing out.

Ultimately, I don’t think many smaller environments have much to be concerned about, as generating groups with many objects is fairly difficult to do. However, in a large environment, it is worth noting that size of your dynamic groups likely matters, as does the logic you use to call them. We didn’t test every type of call, but we were able to confirm that simple calls seem to avoid using CLR altogether.  Naturally, they also run faster. As for larger groups running against larger data sets, we had a few lessons learned.

  1. Consider writing registry discoveries to populate classes. Like groups, this can go overboard, as SCOM does budget the number of classes defined per monitored agent. Still, a few classes in place of large groups would be beneficial. The same logic used to create a group could be used with a product such as SCCM DSC to populate a registry key in an organization-specific portion of the registry. A SCOM discovery could then be written that defines and populates a SCOM class based on the value of this key. There are a few best practices to consider. First, classes should not change frequently, which means these keys should be static; an occasional change isn’t a deal breaker, but frequent changes are a problem.  Second, don’t go overboard with this. Creating a few custom classes to get rid of a few expensive groups is fine, but creating classes for the sake of creating them, without any consideration for how they are used, is a poor way to manage the environment. A group is technically a class, but by using a registry key to define a class instead of a group, you eliminate the need for group calculation and move some of this work to the DB instead of the management server.
  2. When defining groups, use the narrowest class possible. This is already a best practice, but simply put, don’t use Windows Computer to define a subset of domain controllers; use the Windows Domain Controller class. This gives SCOM a much smaller set of data to search through, reducing the load of group calculation. Obviously, there will be scenarios where a broader class is needed. I’m not saying never to do this, but there is value in putting some thought into what you are trying to accomplish, how to do it, and whether or not your solution is efficient.  Efficiency matters in large environments.
  3. For larger data sets, try to find ways to reduce the number of OR statements, or use a more specific string for contains, matches wildcard, or matches regular expression checks. The string “ABCDE” will return fewer results than “ABCD”, and if you need to throw a few excluded items in there, it will be more efficient against a large data set if you can avoid running additional iterations of that function.
  4. For smaller dynamic groups, use the ‘equals’ operator, as it effectively gets rewritten as a simple SQL statement. This allows a direct query to get the exact data needed instead of a more expensive query that has to loop through the entire data set.
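
As promised above, here is a small PowerShell sketch (not from the original troubleshooting, and only an approximation) that uses the OperationsManager module to list groups by member count, which can help identify the dynamic groups most likely to be generating expensive membership queries. GetRelatedMonitoringObjects() counts direct members, so treat the numbers as a rough guide:

# Rough sketch: list SCOM groups ordered by member count (run on a management server).
Import-Module OperationsManager

Get-SCOMGroup |
    ForEach-Object {
        [pscustomobject]@{
            GroupName   = $_.DisplayName
            MemberCount = $_.GetRelatedMonitoringObjects().Count
        }
    } |
    Sort-Object MemberCount -Descending |
    Select-Object -First 20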


Add Multi-Line Content to a Linux File with Add-Content


You have a large number of files to edit that are used by Linux servers. The editing and distributing of these files has to be done on a Windows platform. You want to append some multi-line detail to these files with PowerShell, but the file still needs to be compatible with Linux.

Now, Add-Content with a here string would seem to be a good way to go.

 


$a = @"
#TITLE OF LINE
here is the Unix stuff
"@

Add-Content -Value $a -Path .\linux

 

Unfortunately, because of the way Windows handles a new line, when this file is processed by Linux you'll see ^M at the end of each line in the file. What's going on? Well, the ^M represents 0x0D, the carriage return, which you may also know as \r or `r. When you press Enter in Windows you get a carriage return AND a new line `n (also written \n, 0x0A or ^J), i.e. the two-character sequence `r`n. This mimics the behaviour of a typewriter. In the Unix world you only get the newline character.

So how do you add a Linux-friendly entry with Add-Content?

 


$a = "`n#TITLE OF LINE`nhere is the unix stuff`n"

Add-Content -Value $a -Path .\linux

In Notepad it will appear as a single line; in Linux, however, you'll have the desired multi-line output.
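
If you want to avoid Windows line endings entirely, including the final newline that Add-Content itself appends, one possible alternative (not from the original tip) is to write the string directly with the .NET File class, which appends exactly what you pass it and nothing more:

# Sketch: append LF-only content without the cmdlet adding a trailing CRLF.
# The path is a placeholder; adjust the encoding if your target system needs something other than the default.
$path  = "C:\temp\linuxfile"
$lines = "`n#TITLE OF LINE`nhere is the unix stuff`n"
[System.IO.File]::AppendAllText($path, $lines)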

 

 

PowerShell: Download Documents/Files from SharePoint OnPrem


$Spsite = get-spsite http://contoso.com #Replace this with Site Collection URL

$Location = "C:\destination" #Replace with the folder on the file system where you want to download the files and folders

$AllWebs = $SpSite.AllWebs


function DownloadFiles ($SiteFolder , $Web , $List)
{
    $Web = Get-Spweb $Web
    $list = $web.Lists.GetList($list,$true)
    $items = $list.Items
        foreach($item in $items)
        {

        $FileBinary = $item.File.OpenBinary()
        $FileStream = New-Object System.IO.FileStream(($SiteFolder + "/" + $item.Url), [System.IO.FileMode]::Create)
        $FileWriter = New-Object System.IO.BinaryWriter($FileStream)
        $FileWriter.write($FileBinary)
        $FileWriter.Close()

        }


}


function CreateSiteListFolder($Folder,$Web)
{
$SiteFolder = $Folder
$Web = Get-spweb $Web
$Lists = $Web.Lists|where{$_.BaseType -eq "DocumentLibrary" -and $_.Hidden -eq $false}


    foreach($List in $Lists)
    {

        $ListUrl = $list.RootFolder.ServerRelativeUrl
        $ListUrl = $ListUrl.Split("/")
        $ListUrl = $ListUrl[$ListUrl.Count -1]
        $ListUrl = $ListUrl.ToString()
        $CreateListRootFolder = $SiteFolder + "/" + $ListUrl
        $CreateListFolder = New-Item -Path $CreateListRootFolder -type directory -ErrorAction SilentlyContinue

        $ListFolders = $List.Folders
        foreach($ListFolder in $ListFolders)
        {
            $CreateListFolder = $SiteFolder + "/" + $ListFolder.Url
            $CreateLFolder = New-Item -Path $CreateListFolder -type directory -ErrorAction SilentlyContinue
        }

        DownloadFiles $SiteFolder $Web.url $list.id

    }

}


function Sitecreatefolder($AllWebs)
{


    foreach($Web in $AllWebs)
    {
        if($web.IsRootWeb)
        {
        $RootFolder = $Location + "\" + $web.title
        $CreateRootFolder = New-Item -Path $RootFolder -type directory -ErrorAction SilentlyContinue
        CreateSiteListFolder $RootFolder $Web.url


        }
        else
        {
        $Folder = $Web.Url.ToString()
        $Folder = $Folder.Replace($Spsite.RootWeb.Url,$RootFolder)
        $CreateSiteFolder = New-Item -Path $Folder -type directory -ErrorAction SilentlyContinue
        CreateSiteListFolder $Folder $Web.url
        }

    }


}


Sitecreatefolder $AllWebs
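
One note on prerequisites (an assumption on my part, not stated in the script): the functions above use the server-side object model cmdlets Get-SPSite and Get-SPWeb, so the script has to run on a SharePoint server. If you run it from a plain Windows PowerShell console instead of the SharePoint 2010 Management Shell, load the snap-in first:

# Load the SharePoint server-side cmdlets before running the script above.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue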

Manage Network Bandwidth Across Active Directory Infrastructure with QoS Policy


You can use QoS Policy as a central point of network bandwidth management across your entire Active Directory infrastructure by creating QoS profiles, whose settings are distributed with Group Policy.

Policy-based QoS is the network bandwidth management tool that provides you with network control - based on applications, users, and computers.

Policy-based QoS takes advantage of your existing management infrastructure, because Policy-based QoS is built into Group Policy. You can apply QoS policies to a user login session or a computer as part of a Group Policy object (GPO) that you have linked to an Active Directory container, such as a domain, site, or organizational unit (OU).

QoS traffic management occurs below the application layer, which means that your existing applications do not need to be modified to benefit from the advantages that are provided by QoS policies.

QoS Policy is supported in Windows Server operating systems from 2008 through Windows Server 2016.

For more information, see the Windows Server 2016 Technical Library document set Quality of Service (QoS) Policy.
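
If you want to try the same idea on a single machine before rolling it out through a GPO, the NetQos PowerShell cmdlets (available on Windows Server 2012 and later) create an equivalent local policy. The sketch below is illustrative only and is not part of the documentation referenced above; the application name, DSCP value, and throttle rate are assumptions you would replace with your own:

# Sketch: create a local QoS policy that tags and throttles traffic from a hypothetical application.
New-NetQosPolicy -Name "Backup traffic" `
    -AppPathNameMatchCondition "backupagent.exe" `
    -DSCPAction 16 `
    -ThrottleRateActionBitsPerSecond 100MB

# Verify the policy was created.
Get-NetQosPolicy -Name "Backup traffic"

Policies created this way live only on the local machine; the article's point is that the same kind of rule, defined in a GPO, is distributed automatically across the Active Directory infrastructure.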

Inspire 2017 Education Focused Session Recordings


Last week at Inspire there were several education-focused sessions, and rather than being focused on individual products they covered a variety of technologies and solutions. This included Intune for Education, which not only offers an easier approach to managing Windows devices, but also introduced some new device licensing options through the CSP program.

WIN12 Win K-12 education business with new Microsoft education products

Watch exciting demos on how teachers can deliver better learning outcomes using new Microsoft education products. Come see Windows 10, Windows 10 S, Office 365 for Education, School Data Sync, Microsoft Teams, and Intune for Education in action.

Watch Video

IND07 Education networking lunch and FY18 strategy and priorities

This luncheon is dedicated to education partners, provides opportunities for networking with fellow partners from around the globe, and includes Anthony Salcito’s kick-off keynote articulating Microsoft’s FY18 education strategy and priorities.

Watch Video

IND08 Go-to-market (GTM) with education: A new model for sales

This session will detail Microsoft’s Education GTM strategy in a cloud first, mobile first education world. Learn how you can align and benefit from the evolution of the Authorized Education Partner program and how we will approach partner recruit, onboarding, enablement and co-selling.

Watch Video

IND09 Innovation and solutions in education

This session walks through a set of specific solution examples – e.g., analytics, student lifecycles, and modern classroom – and outlines how to engage by geography, and by product group, to maximize the sales engines that will be deployed to drive solution selling in education.

Watch Video

IND10 Product innovations addressing education needs

Recent product announcements increase opportunities for edu partners: Windows, Office 365, School Data Sync, Teams, and Intune for Education are delivering differentiated value to edu institutions around the globe. Learn the latest about these technologies and how to grow your business with them.

Watch Video

 

 

Inspire 2017 IoT Session Recordings


In the local OEM team our focus area doesn't really include IoT devices, but that doesn't mean we don't get asked about Microsoft's big-picture strategy for IoT. Here I've collected the IoT-related sessions from last week's Inspire event, which hopefully answer some of those questions; at worst, I can always point people back to this post when they have questions.

WIN07 Microsoft's vision for IoT

Understand Microsoft's vision for IoT and learn how to address top customer challenges -- from connectivity to security to scalability. Differentiate your offerings by realizing the full value of the Microsoft IoT portfolio, ranging from Advanced Analytics, Dynamics, Windows IoT and more.

Watch Video

ISV09p How ISV applications can become part of device-led IoT solutions

An ever-increasing number of partners need to come together to deliver an IoT solution to an organization. Attend this session to understand how ISVs play a crucial part to bring device, cloud, intelligence, and security together.

Watch Video

CE603 Simplifying IoT: From edge to cloud computing

Everything new in Microsoft IoT - get started with out-of-the-box SaaS and PaaS solutions, use advanced analytics to harness insights, and localize intelligence to your edge devices. Leverage our newest IoT offerings to grow your cloud business.

Watch Video

CE605 Real-world IoT projects in action: From pilot to production

The easy part of an IoT project is convincing your customer to do a pilot. Now what? We dissect several successful IoT implementations and deep dive on how to actually build IoT solutions for your customers. Join this session to learn from the successes of other IoT partners.

Watch Video

IoT02 Transformation from device business to... your business "as a service"

Learn about the opportunities to transform, expand, and grow your device business through service-oriented IoT offerings, leveraging the possibilities of Microsoft cloud services.

Watch Video

CE606 Accelerating partner value creation with IoT SaaS

Microsoft IoT Central, our new SaaS offering, helps deliver digital transformation quickly for your customers and create new revenue streams for your business with value-added services. Hear from an IoT partner about their successful journey of enabling digital transformation for their customers.

Watch Video

IoT03 How to do a device-to-cloud IoT project

Determining whether an IoT project is viable and ultimately successful is not always easy. Success in a project starts with setting the right goals and building on what you already have. Join us as we have an open dialogue using Microsoft's internal IoT projects as examples.

Watch Video

IoT04 How to educate your workforce on IoT

The IoT technology landscape is changing quickly around and inside organizations. Having clear development paths for employees, for both personal and organizational creativity and growth, can be a crucial key to a successful device-to-cloud IoT practice.

Watch Video

CE602p Partner presentation: Building a booming IoT practice from the ground up

Hear from a Microsoft Partner of the Year winner on how they have successfully built a booming IoT cloud practice and won over customers. Learn what it takes to make IoT profitable for you.

Watch Video

 

 
