
PowerShell is open source and available on Linux


Continuing our commitment to an open, flexible platform that meets our customers' needs, we are open-sourcing PowerShell under the MIT license and making the technology available on Linux. In today's multi-platform IT environments, it is important to give customers the ability to use the same scripts and skills they know from Windows Server on Linux systems as well. This makes it possible to manage Linux and Windows servers consistently using the automation capabilities of Microsoft Azure and Operations Management Suite (OMS).

What does this mean for current customers?

Current customers using Azure management through OMS will be able to use PowerShell to manage not only Windows Server but also Linux servers.

Existing PowerShell users can now manage both Windows Server and Linux from any client system running macOS, Linux, or Windows.

I am a Linux user. What does this mean for me?

For Linux users, PowerShell provides a rich interactive shell and a heterogeneous management and automation framework that works very well with existing tools and is optimized for working with structured data (e.g. JSON, CSV, XML), REST APIs, and object models. To start learning, see the PowerShell home page and the PowerShell learning path.
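To give a flavor of what that looks like in practice, here is a minimal sketch (file names and the REST endpoint are just examples) of how PowerShell treats JSON, CSV, and REST responses as objects rather than text:

# Parse a JSON file into objects and pull out one property (path is an example)
Get-Content ./settings.json | ConvertFrom-Json | Select-Object -ExpandProperty version

# Call a REST API and work with the result as an object (endpoint is an example)
Invoke-RestMethod -Uri 'https://api.github.com/repos/PowerShell/PowerShell' |
    Select-Object full_name, stargazers_count

# Filter a CSV file without any text parsing (file is an example)
Import-Csv ./servers.csv | Where-Object { $_.Role -eq 'web' }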

How are OMS and PowerShell related?

OMS provides insight into and control over applications and workloads in Microsoft Azure as well as other clouds. It greatly helps with the cloud transformation of Linux and Windows Server environments.

PowerShell provides a heterogeneous automation and management framework that speeds up administrative tasks on Windows Server and Linux.

OMS offers PowerShell as a service. OMS Automation delivers PowerShell and Desired State Configuration (DSC) as a highly available, scalable management service from the Microsoft Azure cloud. You can graphically author and manage all PowerShell resources, such as runbooks, DSC configurations, and DSC nodes, from a single place. With the OMS Hybrid Worker you can additionally extend automation, monitoring, and configuration capabilities beyond the cloud to on-premises environments running in your own datacenter.
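As a rough illustration of the kind of artifact OMS Automation works with, here is a minimal DSC configuration sketch; the resource name and paths are only examples, and in OMS Automation / Azure Automation DSC you would import and compile such a configuration in the service rather than locally:

Configuration BaselineConfig {
    Node 'localhost' {
        # Ensure a marker file exists; a real configuration would declare roles, packages, services, etc.
        File MarkerFile {
            DestinationPath = 'C:\Temp\baseline.txt'
            Contents        = 'managed by DSC'
            Ensure          = 'Present'
        }
    }
}

# Compile the configuration locally into a MOF document (output path is an example)
BaselineConfig -OutputPath 'C:\DSC\BaselineConfig'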

What are the differences between Windows PowerShell and PowerShell Core?

Windows PowerShell is the edition of PowerShell built on the .NET Framework and is available only on Windows and Windows Server. PowerShell Core is the edition built on .NET Core and is available on Windows, Windows Server, Nano Server, macOS, and Linux.

Which systems will PowerShell run on?

Windows PowerShell is supported on:

  • client systems from Windows 7 through Windows 10 Anniversary Edition
  • server systems from Windows Server 2008 R2 through Windows Server 2016

PowerShell Core can be used on:

  • Windows 8.1 through Windows 10 Anniversary Edition
  • Windows Server 2012 R2 through Windows Server 2016 (including Nano Server)
  • OS X 10.11
  • Ubuntu 14.04 and 16.04
  • CentOS 7
  • Red Hat 7

Where can I find the PowerShell sources and examples of multi-platform usage?

More in Jeffrey Snover's post on the Azure blog: PowerShell is open sourced and is available on Linux


Install Azure CLI on Windows 10 1607


Now that we have Windows 10 1607, let's take advantage of it. If you look at the documentation for Azure CLI, there is no shortage of ways to install it fairly easily on Linux and Mac. For example, on Mac I use the microsoft/azure-cli Docker image with Docker for Mac. On Linux, I simply used npm to install Azure CLI.

But what about Windows? With Windows 10 1607, you just need to install the Bash shell (Windows Subsystem for Linux) as described in this article (written in April but still valid on the final Windows 10 1607).

Then simply launch Bash and install Azure CLI as you would on Ubuntu, with these commands:

sudo apt-get install nodejs-legacy
sudo apt-get install npm
sudo npm install azure-cli -g

All that remains is to test the azure command, authenticate with azure login, and you're done.

That's at least one use for Bash on Windows for admins!

 

 

 

 

Surface Ethernet Drivers


Hi,

My name is Scott McArthur and I am a Supportability Program Manager for Surface. Today I have a quick blog on some important deployment information regarding the Surface Ethernet Drivers. When doing a deployment, it is necessary to add the Surface Ethernet drivers to boot images (ConfigMgr, MDT, WDS). This blog will discuss where to get the drivers.

First some technical details on the drivers themselves:

  • PNPID: VID_045E&PID_07C6 (this is a partial PNPID)
  • Windows 10 X86
    • msux86w10.inf
    • msux86w10.sys
  • Windows 10 X64
    • msux64w10.inf
    • msux64w10.sys
  • Windows 8.1 X86
    • msu30x86w8.inf
    • msu30x86w8.sys
  • Windows 8.1 X64
    • msu30x64w8.inf
    • msu30x64w8.sys

Where to get the driver:

  • Starting with Windows 10 Version 1607, the Surface Ethernet driver is included in the box, so if you are deploying 1607 it should just work. Note that the inbox version is 10.2.504.2016; a later version has already been released.
  • The .MSI package which is used to update existing installs of Surface contains the Surface Ethernet Drivers.
  • The .ZIP package used for bare metal deployments (for example, Surface Pro 4) does not contain the Surface Ethernet drivers.

To download the latest drivers for Surface Ethernet adapter, do the following:

  1. Browse to the Microsoft Update Catalog here.
  2. In the search box enter Surface Ethernet Drivers.
When looking at the drivers, pay special attention to the version column. At the time this blog was published, the latest drivers were:
  • Windows 10: 10.2.704.2016
  • Windows 8.1: 8.18.303.2015

[Screenshot: Surface Ethernet driver entries in the Microsoft Update Catalog]

Notes:

  • Check back regularly for any new versions.
  • There are 2 entries, one for x86 and one for x64. The x86 driver is primarily for customers who have purchased the Surface Ethernet Adapter and are using it on another device, although you may also be booting an x86-based boot image while installing an x64 operating system. Best practice is to include both in your deployments; a sketch of injecting them into a boot image follows these notes.
  • The drivers support the older 10/100 adapter (model 1552), the newer USB 3.0 1 Gb adapter (model 1663), and Surface docks.
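As a sketch of the boot-image step mentioned at the top of this post, the extracted drivers can be injected with the DISM PowerShell cmdlets; the image and driver paths below are examples, and ConfigMgr/MDT can also do this for you from their driver stores:

# Mount the boot image (index 1) from a deployment share (paths are examples)
Mount-WindowsImage -ImagePath 'C:\DeploymentShare\Boot\LiteTouchPE_x64.wim' -Index 1 -Path 'C:\Mount'

# Inject the extracted Surface Ethernet drivers (keep x86 and x64 in separate folders and add the right one per image)
Add-WindowsDriver -Path 'C:\Mount' -Driver 'C:\Drivers\SurfaceEthernet\x64' -Recurse

# Commit the changes and unmount the image
Dismount-WindowsImage -Path 'C:\Mount' -Save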

Hope this helps with your deployments.

What’s New In Windows Server 2016 Standard Edition Part 2 – Identity


In the first post of this series I highlighted that with Windows Server 2016 there are some feature differences between the Standard and Enterprise editions that might get lost in some of the messaging. In this series of posts I'm going to be highlighting the feature set of Windows Server 2016, drawing on a few different resources; the primary one is the Windows Server 2016 Technical Preview 5 Feature Comparison, along with some other useful links. As mentioned in the first post of the series, these posts will focus on what's new from a Windows Server 2012 R2 perspective, rather than a Windows Server 2008 R2 or Windows Server 2012 perspective; I will cover those later if needed.

Today’s topic is identity, and following you will find the information from the Feature Comparison Guide, including some links to additional resources.

Identity

Identity is the new control plane to secure access to on-premises and cloud resources. It centralizes your ability to control user and administrative privileges, both of which are very important when it comes to protecting your data and applications from malicious attack. At the same time, our users are more mobile than ever, and need access to computing resources from anywhere.

Active Directory Domain Services (AD DS)

Active Directory Domain Services (AD DS) stores directory data and manages communication between users and domains, including user logon processes, authentication, and directory searches. An Active Directory domain controller is a server that is running AD DS.

New Domain Services Capabilities

New in Windows Server 2016:
• Privileged Access Management. This capability, which allows organizations to provide time-limited access to administrator accounts, will be covered in more detail in a later post in this series.
• Azure Active Directory Join. There are enhanced identity experiences when devices are joined to Azure Active Directory. These include applying Modern settings to corporate-owned workstations, such as access to the Windows Store with corporate credentials, live tile and notification settings roaming, and backup/restore.
For more information, see Windows 10 for the enterprise: Ways to use devices for work.
• Microsoft Passport. Active Directory Domain Services now supports desktop login from Windows 10 domain-joined devices with Microsoft Passport. Microsoft Passport offers stronger authentication than password authentication, with device-specific and TPM-protected credentials. For more information, see Authenticating identities without passwords through Microsoft Passport.

Active Directory Federation Services (AD FS)

AD FS is a standards-based service that allows the secure sharing of identity information between trusted business partners (known as a federation) across an extranet. Active Directory Federation Services (AD FS) builds on the extensive AD FS capabilities available in the Windows Server 2012 R2 timeframe. Key enhancements to AD FS in Windows Server 2016, including better sign-on experiences, smoother upgrade and management processes, conditional access, and a wider array of strong authentication options, are described in the topics that follow.

Better Sign-On to Azure AD and Office 365

One of the most common usage scenarios for AD FS continues to be providing sign-on to Office 365 and other Azure AD based applications using your on-premises Active Directory credentials.

AD FS extends hybrid identity by providing support for authentication based on any LDAP v3 compliant directory, not just Active Directory. This allows you to enable sign in to AD FS resources from:
• Any LDAP v3 compliant directory, including AD LDS and third-party directories.
• Untrusted or partially trusted Active Directory domains and forests.
Support for LDAP v3 directories is done by modeling each LDAP directory as a ‘local’ claims provider trust. This enables the following admin capabilities:
• Restrict the scope of the directory based on OU.
• Map individual attributes to AD FS claims, including login ID.
• Map login suffixes to individual LDAP directories.
• Augment claims for users after authentication by modifying claim rules.
For more see Configure AD FS to authenticate users stored in LDAP directories

Improved Sign-On Experience

AD FS now allows for customization of the sign-on experience. This is especially applicable to organizations that host applications for a number of different customers or brands. With Windows Server 2016, you can customize not only the messages, but images, logo and web theme per application. Additionally, you can create new, custom web themes and apply these per relying party.

Users on Windows 10 devices and computers will be able to access applications without having to provide additional credentials, just based on their desktop login, even over the extranet.

Strong Authentication Options

AD FS in Windows Server 2016 provides more ways to authenticate different types of identities and devices. In addition to the traditional Active Directory based logon options (and new LDAP directory support), you can now configure device authentication or Azure MFA as either primary or secondary authentication methods.

Using either the device or Azure Multi-Factor Authentication (MFA) methods, you can create a way for managed, compliant, or domain joined devices to authenticate without the need to supply a password, even from the extranet. In addition to seamless single sign-on based on desktop login, Windows 10 users can sign-on to AD FS applications based on Microsoft Passport credentials, for a more secure and seamless way of authenticating both users and devices.

Simpler Upgrade, Deployment, and Management

Previously, migrating to a new version of AD FS required exporting configuration from the old farm and importing to a brand new, parallel farm. Now, moving from AD FS on Windows Server 2012 R2 to AD FS on Windows Server 2016 has gotten much easier. The migration can occur like this:
• Add a new Windows Server 2016 server to a Windows Server 2012 R2 farm, and the farm will act at the Windows Server 2012 R2 farm behavior level, so it looks and behaves just like a Windows Server 2012 R2 farm.
• Add new Windows Server 2016 servers to the farm, verify the functionality and remove the older servers from the load balancer.
• Once all farm nodes are running Windows Server 2016, you are ready to upgrade the farm behavior level to 2016 and begin using the new features (see the sketch below).
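As a minimal sketch of that final step, assuming the AD FS PowerShell module on one of the Windows Server 2016 federation servers, it might look like this:

# Check for blocking issues, then raise the farm behavior level (run once, on a 2016 federation server)
Test-AdfsFarmBehaviorLevelRaise
Invoke-AdfsFarmBehaviorLevelRaise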

Previously, custom AD FS policies had to be configured in the claim rules language, making it difficult to implement and maintain more complex policies. Now, in AD FS in Windows Server 2016, policies are easier to configure with wizard-based management that allows you to avoid writing claim rules even for conditional access policies. The new access control policy templates enable the following new scenarios and benefits:
• Templates to simplify applying similar policies across multiple applications.
• Parameterized policies to support assigning different values for access control (e.g. Security Group).
• Simpler UI with additional support for many new conditions.
• Conditional Predicates (Security groups, networks, device trust level, require MFA).

AD FS for Windows Server 2016 introduces the ability to have separation between server administrators and AD FS service administrators. This means that there is no longer a requirement for the AD FS administrator to be a local server administrator.

In AD FS for Windows Server 2016, it is much easier to consume and manage audit data. The number of audits has been reduced from an average of 80 per logon to 3, and the new audits have been schematized.

In AD FS on Windows Server 2012 R2, certificate authentication could not be done over port 443. This is because you could not have different bindings for device authentication and user certificate authentication on the same host. In Windows Server 2016 this has changed. You can now configure user certificate authentication on standard port 443.

Conditional Access

AD FS in Windows Server 2016 builds on our previous device registration capabilities by enabling new scenarios, working with Azure AD, to require compliant devices and either restrict or require multiple factors of authentication, based on management or compliance status.

Azure AD and Intune based conditional access policies enable scenarios and benefits such as:
• Enable access only from devices that are managed and/or compliant.
• Restrict access to corporate 'joined' PCs (including managed devices and domain-joined PCs).
• Require multi-factor authentication for computers that are not domain joined and devices that are not compliant.

AD FS in Windows Server 2016 can consume the computer or device compliance status, so that you can apply the same policies to your on-premises resources as you do for the cloud.

Compliance is re-evaluated when device attributes change, so that you can always ensure policies are being enforced.

Seamless Sign-On from Windows 10 and Microsoft Passport

Domain Join in Windows 10 has been enhanced to provide integration with Azure AD, as well as stronger and more seamless Microsoft Passport based authentication. This provides the following benefits after being connected to Azure AD:
• SSO (single-sign-on) to Azure AD resources from anywhere.
• Strong authentication and convenient sign-in with Microsoft Passport and Windows Hello.

AD FS in Windows Server 2016 provides the ability to extend the above benefits and device policies to on-premises resources protected by AD FS.

Developer Focus

AD FS for Windows Server 2016 builds upon the OAuth protocol support that was introduced in Windows Server 2012 R2 to enable the most current, industry standard-based authentication flows among web apps, web APIs, and browser and native client-based apps.

Windows Server 2012 R2 offered support for the OAuth authorization grant flow and authorization code grant type, for public clients only.

In Windows Server 2016, the following additional protocols and features are supported:
• OpenID Connect support.

• Additional OAuth authorization code grant types:
– Implicit flow (for single-page applications).
– Resource Owner Password (for scripting apps).

• OAuth confidential clients (clients capable of maintaining their own secret, such as an app or service running on a web server).

• OAuth confidential client authentication methods:
– Symmetric (shared secret / password).
– Asymmetric keys.
– Windows Integrated Authentication (WIA).

• Support for "on behalf of" flows as an extension to basic OAuth support.

Registering modern applications has also become simpler using AD FS in Windows Server 2016. Now instead of using PowerShell to create a client object, modeling the web API as an RP, and creating all of the authorization rules, you can use the new Application Group wizard.
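For contrast, here is a hedged sketch of the Windows Server 2012 R2-era approach the wizard replaces: registering the client and modeling the web API as a relying party trust by hand. The names, GUID, and URLs below are made up for illustration.

# Register the OAuth public client
Add-AdfsClient -Name 'SampleNativeApp' `
    -ClientId '5f8e1a2b-3c4d-4e5f-8a9b-0c1d2e3f4a5b' `
    -RedirectUri 'https://localhost/oauth-callback'

# Model the web API as a relying party trust and permit access via an authorization rule
Add-AdfsRelyingPartyTrust -Name 'SampleWebApi' -Identifier 'https://api.contoso.com' `
    -IssuanceAuthorizationRules '=> issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value = "true");'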

Web Application Proxy

The Web Application Proxy is a Windows Server service that allows for secure publishing of internal resources to users on the Internet.

Web Application Proxy

Web Application Proxy supports new features including pre-authentication support with AD FS for HTTP Basic applications such as Exchange ActiveSync. Additionally, certificate authentication is now supported. The following new features build on the existing application publishing capabilities found in the Web Application Proxy in Windows Server 2012 R2 (a publishing sketch follows the list):
• Pre-authentication for HTTP Basic application publishing: HTTP Basic is the authorization protocol used by many protocols, including ActiveSync, to connect rich clients, including smartphones, to your Exchange mailbox. Web Application Proxy traditionally interacts with AD FS using redirections, which are not supported by ActiveSync clients. This new version of Web Application Proxy provides support for publishing an app that uses HTTP Basic by enabling the HTTP app to receive a non-claims relying party trust for the application to the Federation Service. For more information on HTTP Basic publishing, see Publishing Applications using AD FS Pre-authentication
• Wildcard Domain publishing of applications: To support scenarios such as SharePoint 2013, the external URL for the application can now include a wildcard to enable you to publish multiple applications from within a specific domain, for example, https://*.sp-apps.contoso.com. This will simplify publishing of SharePoint apps.
• HTTP to HTTPS redirection: In order to make sure your users can access your app, even if they neglect to type HTTPS in the URL, Web Application Proxy now supports HTTP to HTTPS redirection.
• Publishing of Remote Desktop Gateway Apps: For more information on RDG in Web Application Proxy, see Publishing Applications with SharePoint, Exchange and RDG
• New debug log for better troubleshooting, and an improved service log for a complete audit trail and improved error handling. For more information on troubleshooting, see Troubleshooting Web Application Proxy
• Administration Console UI improvements
• Propagation of client IP address to backend applications
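To make the publishing workflow these features build on concrete, here is a hedged sketch of publishing one application with AD FS pre-authentication; the name, URLs, relying party name, and certificate thumbprint are placeholders you would replace with your own values.

# Publish an internal application through Web Application Proxy with AD FS pre-authentication
Add-WebApplicationProxyApplication -Name 'Contoso Expenses' `
    -ExternalPreAuthentication ADFS `
    -ADFSRelyingPartyName 'Contoso Expenses' `
    -ExternalUrl 'https://expenses.contoso.com/' `
    -BackendServerUrl 'https://expenses.contoso.com/' `
    -ExternalCertificateThumbprint '69DF0AB8434060DC869D37BBAEF770ED5DD8C32A'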

 

Council Spotlight: TechNet & MSDN Gurus, step up and be known!


August Gurus step up and show us your knowledge on the latest and the greatest technologies Microsoft has to offer!

And for your efforts, eminent leaders in your technology will evaluate your contributions and award real virtual medals!

All you have to do is add an article to TechNet Wiki from your own specialist field. Something that fits into one of the categories listed on the submissions page. Copy in your own blog posts, a forum solution, a white paper, or just something you had to solve for your own day’s work today.

Drop us some nifty knowledge, or superb snippets, and become MICROSOFT TECHNOLOGY GURU OF THE MONTH!

This is an official Microsoft TechNet recognition, where people such as yourselves can truly get noticed!

HOW TO WIN

1) Please copy over your Microsoft technical solutions and revelations to TechNet Wiki.

2) Add a link to it on THIS WIKI COMPETITION PAGE (so we know you’ve contributed)

3) Every month, we will highlight your contributions, and select a “Guru of the Month” in each technology.

If you win, we will sing your praises in blogs and forums, similar to the weekly contributor awards. Once “on our radar” and making your mark, you will probably be interviewed for your greatness, and maybe eventually even invited into other inner TechNet/MSDN circles!

Winning this award in your favoured technology will help us identify the active members of each community.

 

Wiki, Wiki, Wiki, Wiki,

  • Peter Laker, and
  • Ninja Ed

New Project cloud plans


We wanted to streamline our cloud subscriptions and make them as clear as possible for our customers, so starting in August of this year we have updated the Project cloud lineup.

The following plans are now available:

  • Project Online Professional – designed for organizing and structuring projects, resources, and work groups, for building project plans efficiently, and for tracking their status. Supports remote collaboration.
  • Project Online Premium – designed for effective management of project portfolios, programs, and resources.
  • Project Online Essentials – a lightweight version of Project that lets team members easily manage tasks, submit timesheets, and collaborate with colleagues. Project Online Essentials is an add-on for team members, intended for customers who have Project Online Professional or Project Online Premium.

These plans are available in all channels and programs, including government and education.


How will these changes affect current and new customers?

Starting August 1, new customers will be able to subscribe only to the new plans.

Nothing changes for existing customer agreements for the duration of the agreement. When an existing agreement is renewed, customers will be able to move to the new plans.

  • For agreements expiring before December 31, 2016, you can either upgrade or keep your current plan for one more term.
  • For agreements expiring after December 31, 2016, you must move to a new plan at renewal.

Next steps

If you are currently subscribed to one of the Project cloud plans that is scheduled to be updated, you will be able to move to a new plan according to your agreement's renewal date. The overview below shows how you can move from the current plans to the new plans.

Current plan: Project Lite
  • During the term of your current agreement: no action is required; your current Project Lite license will automatically be renamed Project Online Essentials.
  • At renewal, if your agreement expires before December 31, 2016: upgrade to Project Online Essentials.
  • At renewal, if your agreement expires after December 31, 2016: upgrade to Project Online Essentials.

Current plan: Project Pro for Office 365 (note that this plan has been renamed Project for Office 365)
  • During the term of your current agreement: no action is required.
  • At renewal, if your agreement expires before December 31, 2016: option to renew on the current plan or move to the new plan (Project Online Professional).
  • At renewal, if your agreement expires after December 31, 2016: upgrade to Project Online Professional.

Current plan: Project Online
  • During the term of your current agreement: no action is required; your current Project Online license will automatically be renamed Project Online Premium without Project Client.
  • At renewal, if your agreement expires before December 31, 2016: option to renew on the current plan or move to the new plan (Project Online Premium).
  • At renewal, if your agreement expires after December 31, 2016: upgrade to Project Online Premium.

Current plan: Project Online with Project Pro for Office 365
  • During the term of your current agreement: no action is required.
  • At renewal, if your agreement expires before December 31, 2016: option to renew on the current plan or move to a new plan (Project Online Premium or Project Online Professional).
  • At renewal, if your agreement expires after December 31, 2016: upgrade to Project Online Professional or Project Online Premium.

 

As soon as a new plan is selected and a new subscription is created, every user in the tenant will receive a license for the new subscription. Tenant administrators can reassign licenses to their users manually or in bulk with PowerShell (a minimal sketch follows the list below).

  • For manual reassignment, read this article.
  • For bulk reassignment with PowerShell, read this article (in English).
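As a minimal sketch of the bulk option, assuming the MSOnline PowerShell module and purely illustrative SKU names (check your real SKUs with Get-MsolAccountSku):

# Connect and swap licenses for all users on the old plan (SKU names below are examples)
Connect-MsolService
Get-MsolUser -All |
    Where-Object { $_.Licenses.AccountSkuId -contains 'contoso:PROJECTONLINE_PLAN_1' } |
    ForEach-Object {
        Set-MsolUserLicense -UserPrincipalName $_.UserPrincipalName `
            -AddLicenses 'contoso:PROJECTPREMIUM' `
            -RemoveLicenses 'contoso:PROJECTONLINE_PLAN_1'
    }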

Questions and answers

Is the new Project Online Professional plan just a renamed Project Pro for Office 365 plan at a higher price?

No. The new Project Online Professional plan includes project collaboration capabilities that previously required a separate Project Online purchase. So we have added substantial new value, letting users publish their project plans to the cloud and work on them with colleagues. In addition, the Project Online Professional plan includes a web interface for project managers, so they can work on project plans from any computer.

What if users only need access to the web-based project portfolio management functionality and do not need the Project desktop client? Is there a cheaper option than the Project Online Premium plan?

The Project Online Premium plan is the best plan for such users. That said, for EA customers a cheaper version of the Project Online Premium plan will be available without the Project client application. This is essentially the current Project Online plan, renamed.

Is there a risk of losing content during the transition?

No. Users will not lose content when moving to a new plan. Some users may see restricted access to content while new licenses are being assigned, but administrators can help assign the appropriate user licenses.

Will there be any loss of functionality?

No, the changes only concern licensing.

Which plans/licenses do Project Online administrators need?

Administrators need the Project Online Professional or Project Online Premium plan.

Where can I find more information about the product capabilities in each plan?

The Project Online service description page contains a more detailed description of the product capabilities.

Do you know the right way to respond when you are infected with ransomware? [Updated 8/19]


 

Recently, damage from ransomware (ransom-demanding malware) has been spreading. Do you know the correct way to respond when your PC may have been infected with ransomware? If you handle it the wrong way, your PC may get locked or your files encrypted, and you could lose important data.

The Microsoft Japan security team has written up its thoughts on how to respond to a ransomware infection, so please have a look.

▼ Considerations on how to respond to a ransomware infection

 

 

 

Using ARM Templates with Azure Germany


With Azure Resource Manager (ARM for short), even complex deployments can be stored in text files (in JSON format). These so-called templates can then simply be reused. Many templates can be found on the Internet that take a good deal of work off your hands, since you only have to make minor adjustments. For example, there are plenty of templates on GitHub at https://github.com/Azure/azure-quickstart-templates. In the global Azure cloud these templates run as-is, but what do you have to watch out for if you want to use them with Microsoft Cloud Deutschland?

To illustrate this, let's try two templates: the very simple deployment of a Windows server, and a Minecraft server on Ubuntu. Both can be found as templates under the GitHub link above, as 101-vm-simple-windows and minecraft-on-ubuntu.

On the GitHub pages there is a "Deploy to Azure" button for each template. If you look at where that link goes, it quickly becomes clear that this won't work: the link is built for the global Azure cloud (and points to the global portal at azure.com). So we first have to download the templates; for some of them we have to make adjustments anyway. So then, let's start with the Windows server.

101-vm-simple-windows

On the GitHub pages we see several files; the interesting ones for us are azuredeploy.json and azuredeploy.parameters.json. I always click on the file, then on the "Raw" button, then select everything and copy it into a text editor. There are certainly easier ways; to each their own... We save both files somewhere locally.

Let's start with the easier part, the parameter file. If you open it in an editor, there are three settings to fill in: admin name, password, and DNS name:

[code highlight="6,9,12"]
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"adminUsername": {
"value": "admin123"
},
"adminPassword": {
"value": "GEN-PASSWORD"
},
"dnsLabelPrefix": {
"value": "GEN-UNIQUE"
}
}
}
[/code]

We fill in the lines with "value" (highlighted above) with sensible values and save the file again. And that's already it. Honestly. The 101 template on GitHub has already been modified so that it can be used in any cloud environment. Let's try it out: start a PowerShell session with the Azure cmdlets installed and log in:

[code language="powershell" gutter="false"]
Login-AzureRMAccount -Environment AzureGermanCloud
[/code]

For anyone for whom the AzureGermanCloud environment does not exist: install a newer version of the Azure PowerShell cmdlets...

First we create a new resource group; the available regions are germanycentral and germanynortheast:

[code language="powershell" gutter="false"]
New-AzureRmResourceGroup -ResourceGroupName ralf101new -Location germanynortheast
[/code]

The name of the resource group is of course up to you. As soon as the group has been created, nothing stands in the way of our deployment. We change into the directory with our JSON files from above and start the deployment (which we simply also call "ralf101new"):

[code language="powershell" gutter="false"]
New-AzureRmResourceGroupDeployment -Name ralf101new -ResourceGroupName ralf101new -TemplateFile .\azuredeploy.json -TemplateParameterFile .\azuredeploy.parameters.json
[/code]

And after a few minutes Azure reports that the deployment succeeded. That was easy, wasn't it? Our second attempt gets a bit harder.

minecraft-on-ubuntu

Here too we proceed as above and download the two files, preferably into a new directory, and again edit the parameter file (the meaning of the parameters should be clear...). We will come back to the third file later.

In this case we don't even need to start the deployment; it is going to fail. Why? Well, the problem lies (among other things) in line 305:
[code gutter="false"]
"uri": "[concat('http://',variables('newStorageAccountName'),'.blob.core.windows.net/',variables('vmStorageAccountContainerName'),'/',variables('OSDiskName'),'.vhd')]"
[/code]

This is a popular way to build the URI for the storage account created earlier in the template. Unfortunately, the suffix "blob.core.windows.net" is no good for Azure Germany; it would have to be "blob.core.cloudapi.de". That is the only adjustment needed. Admittedly, this makes the template unusable for the global Azure cloud, but that will be the topic of another blog article. So: change it and save. And off we go again: create the group and start the deployment:
[code language="powershell" gutter="false"]
New-AzureRmResourceGroup -ResourceGroupName ralfmc01 -Location germanynortheast
New-AzureRmResourceGroupDeployment -Name ralfmc01 -ResourceGroupName ralfmc01 -TemplateFile .\azuredeploy.json -TemplateParameterFile .\azuredeploy.parameters.json
[/code]

By the way, the deployment also calls a script; that is the third file sitting there. This script then installs the Minecraft server. If you like, you can afterwards configure the server as a multiplayer server. The IP address or server name can be found either in the portal or, since we are already in PowerShell, with:
[code language="powershell" gutter="false"]
Get-AzureRmPublicIpAddress -ResourceGroupName ralfmc01
[/code]

Summary

Many of the templates can be adapted in the way shown, by searching for hard-coded strings and adjusting them to the values for Azure Germany where necessary. The most important endpoints can be found in this blog article.

For everyone who wants to make their templates universally usable, the next blog article is already in preparation...

Have fun!


Using Regular Expressions and Event Viewer with PowerShell

Error accessing Public Folders Hierarchy in coexistence environment 2007-2013


This is a specific issue we have encountered in several deployments, for customers migrating from Exchange 2007 to Exchange 2013, or simply having the two versions coexist in their environment due to specific requirements.

When you attempt to manage your legacy Public Folders from the Public Folder Management console in this scenario, you can connect to the PF server, but the PF hierarchy might not be visible under the Default Public Folders node. If you use the Exchange Management Shell instead, Get-PublicFolder or Get-PublicFolder -Recurse would return the following error:

Get-PublicFolder : There is no existing PublicFolder that matches the
 following Identity: ''. Please make sure that you specified the correct
 PublicFolder Identity and that you have the necessary permissions to view
 PublicFolder.
 At line:1 char:1
 + Get-PublicFolder
 + ~~~~~~~~~~~~~~~~
 + CategoryInfo          : NotSpecified: (0:Int32) [Get-PublicFolder], Mapi
 OperationException
 + FullyQualifiedErrorId : C00ECE14,Microsoft.Exchange.Management.MapiTasks
 .GetPublicFolder

This issue does not prevent Outlook clients from accessing public folders.

When Exchange 2007 sets up the MAPI session to access the public folder database and read the PF hierarchy, it builds the system attendant mailbox DN based on the DN of the server that hosts the oldest (first created) private MDB object. Assuming that the oldest database in the organization was created in Exchange 2007, everything should work fine. However, if Exchange 2013 joins the organization and, later, the Exchange 2007 databases are recreated, it is possible that an Exchange 2013 private MDB becomes the oldest database in the environment, causing the issue described above.

The problem is that due to architectural changes in Exchange 2013, the server’s system attendant object is no longer populated with the homeMDB attribute used by Exchange 2007 to create the mapi session (remember Databases no longer belong to servers after Exchange 2010, but to Administrative Groups).

A quick workaround to this issue is to set the homeMDB attribute of the Microsoft System Attendant object under the Exchange 2013 server hosting the oldest database in the environment. If this database is part of a DAG with copies on multiple servers, the system attendant object under each of those servers should be modified. You will need to perform this change using a low-level directory service tool such as ADSIEdit, and as such the change is unsupported in nature; please proceed at your own risk.

How do we know how old a database object is? Simply by inspecting its whenCreated attribute in AD. Running Get-MailboxServer | Get-MailboxDatabase | Sort-Object WhenCreated | fl Name, Server, WhenCreated will give you a list of mailbox database objects in your environment ordered by creation date.

Which homeMDB value should we use for the property? You can select the DN of one of the databases hosted on the server being modified. In the example below, assuming the oldest database is active on server EXHR-2006, you would populate the homeMDB attribute of the "Microsoft System Attendant" object with the value of the distinguishedName property of "Mailbox Database 1450518890".

 

[Screenshot: setting homeMDB on the Microsoft System Attendant object in ADSIEdit]
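If you prefer scripting over the ADSIEdit UI, the same (still unsupported) change can be sketched with the Active Directory PowerShell module; the DNs below are illustrative, so look up the real ones in your configuration partition first:

# Illustrative DN of the System Attendant object under the Exchange 2013 server (adjust to your org)
$sa = 'CN=Microsoft System Attendant,CN=EXHR-2006,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=Contoso,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=contoso,DC=com'

# DN of a database hosted on that server
$db = (Get-MailboxDatabase 'Mailbox Database 1450518890').DistinguishedName

# Stamp homeMDB on the System Attendant object (unsupported change; proceed at your own risk)
Set-ADObject -Identity $sa -Replace @{ homeMDB = $db }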

Another, more difficult workaround would be to re-create the conflicting Exchange 2013 databases so that their whenCreated attribute is greater than the whenCreated attribute of the oldest Exchange 2007 private database. Depending on the number of Exchange 2013 databases and how recent your oldest 2007 database is, this workaround could be unfeasible.

 

 

Making ARM Templates multi-cloud capable


To be honest, I did not really know what to call this article. So let me first clarify what I am after:

On the Internet you can find many templates for ARM deployments that are more or less suitable for use outside the regular global Azure cloud. So I wanted to show a few points that let you write your templates more flexibly.

Variables and Parameters

As a rule, everything that occurs more than once in a template should be defined in variables, and everything that can change from deployment to deployment in parameters, the latter preferably in a separate file. In the Azure quickstart templates provided on GitHub, the template itself is always in the file azuredeploy.json and the parameters are in azuredeploy.parameters.json.

Location

Another place to gain flexibility is the region, i.e. the location. It can of course be defined directly on the individual resources, for example with:

[code]
"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"name": "mystorage",
"apiVersion": "2016-01-01",
"location": "germanycentral",
(…)
[/code]

If you want a different location, that means editing in possibly several places; you usually have more than one resource. Step 1 would therefore be to replace it with a variable (and, while we are at it, put the storage name into a variable too):

[code]
"variables": {
"storageAccountName": "mystorage",
"storagelocation": "germanycentral",
(…)

"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"name": "[variables('storageAccountName')]",
"apiVersion": "2016-01-01",
"location": "[variables('storagelocation')]",
(…)
[/code]

Better already. But it can be even simpler: just create the resource in the same region as its resource group and save yourself the variable:

[code]
"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"name": "[variables('storageAccountName')]",
"apiVersion": "2016-01-01",
"location": "[resourceGroup().location]",
(…)
[/code]

Here we use the ARM function resourceGroup() and access its location attribute directly. The region of the resource then simply follows the region of the resource group.

Dynamic values with concat

The concat() function is often used to build a URI for blobs:
[code]
"osDisk": {
"name": "osdisk",
"vhd": {
"uri": "[concat('http://',variables('storageAccountName'),'.blob.core.windows.net/',variables('StorageAccountContainerName'),'/',variables('OSDiskName'),'.vhd')]"
},
(..)
[/code]

This works, but only as long as you stay within the environment of the global Azure cloud. In Azure Germany, for example, the storage endpoint is no longer "blob.core.windows.net" but "blob.core.cloudapi.de", and the template stops working. This, by the way, is the point at which many deployment templates fail. So how do you make this more flexible?

Admittedly this is a bit harder, but let's look at it step by step. The line above in universal form would be as follows:
[code]
"osDisk": {
"name": "osdisk",
"vhd": {
"uri": "[concat(reference(resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))).primaryEndpoints.blob, variables('StorageAccountContainerName'), '/', variables('OSDiskName'), '.vhd')]"
},
(..)
[/code]

Phew. So let's take the line apart, best from the inside out for the first part:
[code]
resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))
[/code]
This gives us the ID of the storage account that was created earlier (elsewhere in the template).
[code]
reference(<the ID from above>)
[/code]
in turn gives us a reference to it and, as a side effect, implicitly states that the storage resource must already be fully created before we can continue here; it therefore replaces a "dependsOn". We then access the properties of this reference:
[code]
(reference(<the ID from above>)).primaryEndpoints.blob
[/code]
Since the storage resource was automatically created with the correct endpoint, we simply read it back out here. Neat, isn't it? The rest of the line is simple string concatenation with concat: the container name, the disk name, and the extension. Done. And this line works in all Azure environments without any changes.

Note: the primaryEndpoints property is not available in older API versions. Make sure the definition of the storage resource uses an API version of "2016-01-01" or newer.

Another note: with API version "2016-01-01" the schema has changed as well. Here they are in comparison:
[code highlight="2"]
{
"apiVersion": "2015-06-15",
"type": "Microsoft.Storage/storageAccounts",
"name": "[variables('StorageAccountName')]",
"location": "[resourceGroup().location]",
"properties": {
"accountType": "[variables('storageAccountType')]"
}
},
[/code]

and the newer schema:

[code highlight="2"]
{
"apiVersion": "2016-01-01",
"type": "Microsoft.Storage/storageAccounts",
"name": "[variables('storageAccountName')]",
"location": "[resourceGroup().location]",
"properties": {},
"sku": {
"name": "[variables('storageAccountType')]"
},
"kind": "Storage"
},
[/code]

But that should not be a problem. By the way, the Azure quickstart templates on GitHub are gradually being converted, and most of them already have been.

Question about Backups



Rob Waggoner

 


Question:

Our CFO wants to know an estimate of how long each night the Azure backup will take to back up all three of our servers (SQLserver and two domain controllers)? Can I calculate the time needed?

SQLServer = 283 GBs

Domain Controller 1 = 32 GBs

Domain Controller 2 = 69 GBs

He wants to know what time we need to start it at night (closing is 9 p.m.) so that it is not still backing up, and slowing down operations, the next day.

 

Answer:

The short answer is probably, but I think the real concern is ensuring that the backups do not interfere with the daily business.  So here is how I would address this question. 

First, I’m assuming these are either physical servers, or Hyper-V based VMs.  If they are VMware based VMs, we can still do the backups, but there are a few things we need to discuss.

Because of the workloads being protected, I’m assuming the customer will use Azure Backup for Workloads.  With that said, the initial backup will be disk to disk and stored on a local drive, then Azure Backup for Workloads will stream the local backup to Azure for long term retention. 

We support bandwidth throttling whether using the standard Azure Backup Agent, or Azure Backup for Workloads.  You can define upload bandwidth during work hours and non-work hours (you can also define what “Work hours” means to the customer).  You can restrict the bandwidth consumed both during work hours and non-work hours.  Even if this takes multiple days to upload, it still won’t interfere with the business during work hours.  That is how I have my server configured. 

[Screenshot: Azure Backup bandwidth throttling settings]
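For reference, the same throttling can be scripted with the Azure Backup agent cmdlets. Here is a minimal sketch; the day names and times follow the documented example format, and the bandwidth values are picked arbitrarily:

# Throttle backup traffic to 512 KBps during work hours (Mon-Fri, 9:00-21:00) and 4 MBps otherwise
Set-OBMachineSetting -WorkDay "Mo", "Tu", "We", "Th", "Fr" `
    -StartWorkHour "9:00:00" -EndWorkHour "21:00:00" `
    -WorkHourBandwidth (512*1024) -NonWorkHourBandwidth (4096*1024)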

The partner also has the ability to ship a drive to Azure for the initial backup.  Once the initial backup is complete, only deltas will be sent to Azure after that.  Hence, we can seed the initial backup.  Here is the seeding screen:

[Screenshot: Azure Backup offline seeding options]

If they don’t want to ship a drive:

My suggestion is that they initially back up all three workloads to just the Azure Backup for Workloads server (without online protection). See the screenshot:

[Screenshot: workload protection configured without online protection]

Then modify one backup image per day to add the online protection to Azure.  This way you can send one initial copy of your workload per evening to have a better chance of getting the initial backup to Azure in one night.

 

Until next time,

 

Rob

 

PowerShell is now open sourced and available on OS X 10.11 and Linux!


Yesterday, Microsoft and Jeffrey Snover announced that PowerShell is now open sourced and available on Linux and OS X, and it is available for download here. The blog post goes into great detail about the journey to where we are now, with PowerShell being available on platforms other than Windows, and there's even a Channel 9 video demoing it here. While I strongly encourage you to check out Snover's announcement and video, I also wanted to note a few things:

1. You’re probably wondering how it works. Well, it’s powered by .NET Core, but does not require you to have .NET Core installed. The package you download will install the parts of .NET Core you need.

2. If you’re a current Bash user who is now interested on diving into PowerShell and want to know where to start, I recommend checking out the Map Book for Experienced Bash Users, which is about 70% of the way down on this page.

3. If you want to get started with setting it up on Ubuntu 14.04, 16.04, and OS X 10.11, instructions are on the GitHub link, but also below:

On Ubuntu 14.04, Ubuntu 16.04, and OS X 10.11 (From a terminal session)

cd to your home directory by typing:

cd ~

Download the PowerShell package by typing:

wget https://github.com/PowerShell/PowerShell/releases/download/v6.0.0-alpha.9/powershell_6.0.0-alpha.9-1ubuntu1.14.04.1_amd64.deb


For just Ubuntu 14.04


Install the dependencies:

sudo apt-get install libunwind8 libicu52
sudo dpkg -i powershell_6.0.0-alpha.9-1ubuntu1.14.04.1_amd64.deb


For just Ubuntu 16.04

Install the dependencies:

sudo apt-get install libunwind8 libicu55
sudo dpkg -i powershell_6.0.0-alpha.9-1ubuntu1.16.04.1_amd64.deb


For just OS X 10.11

Install the package:

sudo installer -pkg powershell-6.0.0-alpha.9.pkg -target /

Then run powershell by typing the word: powershell
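Once you are in, the object pipeline works the same way as on Windows. A quick sketch of a first session (output will vary, and some properties may be sparsely populated in the alpha builds):

# Processes are objects, so sorting and selecting needs no awk/cut
Get-Process | Sort-Object -Property CPU -Descending | Select-Object -First 5

# The file system provider works against Linux paths too
Get-ChildItem /var/log | Where-Object { $_.Length -gt 1MB }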


Visual Studio Code Modifications

If you want to edit your Integrated Terminal settings in VS Code on OS X or Ubuntu to use the new PowerShell, I have included the instructions below:

From within Visual Studio Code, click File -> Preferences -> User Settings. This will open a settings.json file


OS X – Add this line:

{"terminal.integrated.shell.osx": "/usr/local/bin/powershell"
}

(Note: this was tested on OS X 10.11.6)

Ubuntu – Add this line:

{"terminal.integrated.shell.linux": "/usr/bin/powershell"
}

(Note: This was tested on 14.04)


Final Tips

Create an alias to use ‘ps’ instead of powershell

cd to home directory:

cd ~

Create or edit ~/.bash_profile using your favorite text editor. I used vim.

vim ~/.bash_profile

Add the following line:

alias ps='powershell'

Type the following to refresh the bash shell environment:

source ~/.bash_profile

And there you go! Test it out, let me know what you think!

Find out if your AGPM archive needs updating


For those of you out there using Advanced Group Policy Management a.k.a. AGPM, I have a question: how do you know that your AGPM archive still reflects the reality in Active Directory?

Thought about it? Good. There is a thorny issue here that caused a lot of problems already. AGPM flat-out assumes that its archive is the truth and contains the most recent information about any Group Policy Object that it manages. But this assumption is not always valid. You may have changes to a GPO in AD directly, out of necessity or by accident. The most common variation is this:

  • You edit and deploy a GPO using AGPM.
  • You change the security filtering on the GPO, limiting the GPO to members in a certain security group. At this point, the AD version of the GPO differs from the one in AGPM.
  • Time passes, and after a while you edit the GPO again in AGPM.
  • After approval, you deploy to production. AGPM is leading, and overwrites whatever is there, including permissions. The custom security filtering is now removed and the GPO gets applied to the wrong set of users/computers.

You get the point. Any situation where the AD version of a GPO is more recent than the version in AGPM is a problem. The other way around is usually not an issue; this just reflects a GPO being edited.

How do you fix this problem? AGPM has a neat solution. In the AGPM console (GPMC –> Change Management), you select the GPO and do right-click, Import, From Production. This will read whatever is in the production GPO right now and promote it to the most recent version in the archive. If you now start an edit session, you will be based on the correct version. The trick to fixing the problem is, of course, in knowing which GPO is affected.

Figuring out which GPOs are more recent in AD than in AGPM is not so easy, short of checking all GPOs in the archive. In fact, that's exactly what I ended up doing. I wrote a PowerShell script to parse the AGPM archive for all managed GPOs and to compare timestamps with the GPO version in AD. Let's have a look at the relevant parts.

The first step is to find out where the AGPM archive is. We require that the script is run on the AGPM server itself, so we can query the local registry directly to find the archive path. Then we apply some PowerShell magic to parse the AGPM state file as XML, ready for use.

[powershell]
$archivepath = Get-ItemProperty HKLM:\SOFTWARE\Microsoft\Agpm -Name archivepath
$path = $archivepath.archivepath
[xml] $archive = Get-Content (Join-Path $path "gpostate.xml")
[/powershell]

Although seldom used as such, AGPM has the possibility to manage all domains in a forest so we must assume that the AGPM archive has multiple domains. This loop iterates over the domains, gets all managed GPOs, and builds an array with the domain list ($domainlist),  and a hash with GPO GUID as key and the AGPM timestamp as value ($agpmtime).

[powershell]
foreach ($gpodomain in $archive.Archive.GPODomain)
{
$domainlist += $gpodomain.domain
foreach ($gpo in $gpodomain.GPO) {
$agpmtime.Add($gpo.id, $gpo.state.time)
}
}
[/powershell]

Then we start looping over all GPOs, and the next item of interest is determining if an import of a particular GPO is needed. We take the difference between the timestamps of the AGPM Archive and the GPO in AD, corrected for daylight saving time ($dstcorrection, calculation not shown). In principle, an AGPM import of the GPO is needed if the AD timestamp is more recent. The reality is a bit more complex, as it turns out. AGPM needs some time to write the GPO to AD, and you may have time differences in the forest leading to different timestamps. So a fudge factor based on the time difference in seconds is needed, as shown below.

[powershell]
$delta = New-TimeSpan -Start $archivetime -End $adtime.AddHours(-$dstcorrection)
$needsimport = switch ($delta.TotalSeconds)
{
{ $_ -le 0 } { "no"; break }
{ $_ -le 5 } { "maybe"; break }
{ $_ -le 60 } { "probably"; break }
{ $_ -gt 60 } { "yes"; break }
}
[/powershell]

And that’s basically it. While still looping over the GPOs we dump the result into the pipeline for further processing, using the neat method of [PSCustomObject] to cast the output into a PowerShell object that is easy to handle.

Usage is simple. The script has no argument and is invoked like this:

[powershell]
Find-AGPMImportNeeded | Out-Gridview
[/powershell]

When run in my test lab of two domains, the result looks like this.

[Screenshot: Find-AGPMImportNeeded output in Out-GridView, showing domain, GPO name, timestamps, and the import guesstimate]

It shows you all the data you need to decide if an import is needed. Domain, name, timestamps, and a guesstimate for the import. This was tested using Powershell 3.0 and 4.0, and with AGPM 4.0 SP2 and SP3. You can find the full script on the TechNet Gallery: https://gallery.technet.microsoft.com/scriptcenter/Find-out-which-GPOs-in-7e798661

A series of regular online meetings: What's new in Microsoft Cloud


Dear colleagues,

I would like to invite you to the regular online meetings "What's new in the Microsoft cloud" for registered Microsoft partners.
In the new 2016/17 season we will be talking about the most interesting and relevant news and changes in Microsoft cloud services. We will discuss news from the world of Azure, Office 365, and EMS.

The meetings are scheduled for Tuesdays at the usual time, from 11:00 to 12:00.
If you are growing a cloud business, want to get all the news in one place, ask questions, discuss scenarios, or tell us about yourself, then join us and you will always be up to date on all cloud changes.

Meeting schedule:

Microsoft Azure: September 6, October 4, November 8
Office 365: August 23, September 20, October 18, November 22, December 13
EMS / Intune: August 30, September 27, October 25, November 29

IMPORTANT! Don't forget to register for each meeting by following the links
hidden under each scheduled date.

Venue: Skype for Business
Language: Russian
Cost: Free

The latest version of the file with announcements of our team's main online meetings is always available at http://aka.ms/RUPTSonline.

Best regards,
Evgeny Artemyev
Partner Technology Consultant (PTC)


Logparser play of a forensicator


My guru (I won't name him, but he knows who he is) told me one day that what we do is not exactly forensics; it is actually root cause analysis, finding out how a security incident happened, so that once we know the root cause we can do multiple things, from prevention to mitigation to recovery.

A few words about forensics. Forensics, in the true sense, is the domain in which evidence collected from a crime scene needs to be presented in a court of law. The rules of evidence and chain of custody, which are very strict, apply in forensics. In the case of IT crimes, evidence is fragile and delicate, and it includes both volatile and non-volatile data, where collection is done in order of volatility, most volatile first. So when a security incident happens, these processes are used by the organization to ensure that, if it wants to go to court to punish the attackers (especially if they are internal), the evidence is protected and the chain of custody is maintained, preferably by a certified and experienced forensic expert. Even a minor modification of the evidence, during the investigation or otherwise, will make it useless in court and allow the attackers to go free. That is why the rules of forensics are so rigid.

Since we don’t work with evidence part so we live in the domain of Root Cause Analysis not Exactly forensics so evidence protection(with all the rigid strict rules) is the main difference between RCA and forensics but these days these terms have been used interchangeably but still we should know the difference.

This was later confirmed when I was studying for the CISSP. One of the things my guru encouraged me to play with was a tool he has perfected: Logparser.

Being an inspired student, I started playing with it, and like any artist who would like to showcase his little piece of art or share his work and his amazement, I am putting up my little compilation that helps me track things, as per my needs, when I am digging deeper into files, events, and logs.

I have organized the queries into groups based on focus, so there are sections focused on logon events, service creation, and process creation, and then a section about playing with the file system using NTFSInfo and USNInfo. To keep the size of the article smaller, I am not adding details about NTFSInfo and USNInfo here, even though they are a top focus (along with other things) during forensics or RCA investigations; maybe I will add their details in a follow-up post. One more point: I mention CSV (comma-separated values) and DataGrid (or grid) as output formats. A CSV file can be opened in Excel for analysis, while DataGrid displays the output directly after Logparser has finished the query, without the help of any text-analysis tool, in a layout of columns and rows much like Excel.

Event Log Analysis

You will notice heavy usage of extract_token(strings, n, '|') AS fieldname in my Logparser queries below. This is one of the most beautiful, effective, and powerful techniques for extracting items from within the Strings field of an event log record. Here 'n' is the position of the item you want to extract and '|' is the delimiter (much like using cut with a delimiter in Linux), so you can change the value of n to get the item you want out of the Strings field; you will get the idea when you see the examples below.

                                                    To track Logon success and failure attempts

 

Successful logons: list all accounts who logged on successfully


Output in grid

logparser -i:evt "SELECT distinct extract_token(strings, 5, '|') AS account FROM 'C:\data\exampleDIR\security.evt' WHERE EventID = 4624" -o:DataGrid

Output in CSV

logparser -i:evt "SELECT distinct extract_token(strings, 5, '|') AS account FROM 'C:\data\exampleDIR\security.evt' WHERE EventID = 4624" -o:csv >C:\data\exampleDIR\accounts_logon_success.csv


 

Failed logon attempts


Output in grid

logparser -i:evt "SELECT distinct extract_token(strings, 5, '|') AS account FROM 'C:\data\exampleDIR\security.evt' WHERE EventID = 4625" -o:DataGrid

Output in CSV

logparser -i:evt "SELECT distinct extract_token(strings, 5, '|') AS account FROM 'C:\data\exampleDIR\security.evt' WHERE EventID = 4625" -o:csv >C:\data\exampleDIR\accounts_logon_fail.csv


 

Logon type 3: Network logon / network access


Output in grid

logparser -i:evt "SELECT distinct extract_token(strings, 5, '|') AS account,extract_token(strings, 8, '|') AS logontype FROM 'C:\data\exampleDIR\security.evt' WHERE EventID = 4624 and extract_token(strings, 8, '|')='3'" -o:DataGrid

Output in CSV

logparser -i:evt "SELECT distinct extract_token(strings, 5, '|') AS account,extract_token(strings, 8, '|') AS logontype FROM 'C:\data\exampleDIR\security.evt' WHERE EventID = 4624 and extract_token(strings, 8, '|')='3'" -o:csv >C:\data\exampleDIR\accounts_logon_logontype3.csv


 

Logon type 10: RDP access


Output in grid

logparser -i:evt "SELECT distinct extract_token(strings, 5, '|') AS account,extract_token(strings, 8, '|') AS logontype FROM 'C:\data\exampleDIR\security.evt' WHERE EventID = 4624 and extract_token(strings, 8, '|')='10'" -o:DataGrid

Output in CSV

logparser -i:evt "SELECT distinct extract_token(strings, 5, '|') AS account,extract_token(strings, 8, '|') AS logontype FROM 'C:\data\exampleDIR\security.evt' WHERE EventID = 4624 and extract_token(strings, 8, '|')='10'" -o:csv >C:\data\exampleDIR\accounts_logon_logontype10.csv


Logon type 2: Interactive, i.e. logon at the console


Output in grid

logparser -i:evt "SELECT distinct extract_token(strings, 5, '|') AS account,extract_token(strings, 8, '|') AS logontype FROM 'C:\data\exampleDIR\security.evt' WHERE EventID = 4624 and extract_token(strings, 8, '|')='2'" -o:DataGrid

Output in CSV

logparser -i:evt "SELECT distinct extract_token(strings, 5, '|') AS account,extract_token(strings, 8, '|') AS logontype FROM 'C:\data\exampleDIR\security.evt' WHERE EventID = 4624 and extract_token(strings, 8, '|')='2'" -o:csv >C:\data\exampleDIR\accounts_logon_logontype2.csv


 

Logon type 5: Service logon


Output in grid

logparser -i:evt "SELECT distinct extract_token(strings, 5, '|') AS account,extract_token(strings, 8, '|') AS logontype FROM 'C:\data\exampleDIR\security.evt' WHERE EventID = 4624 and extract_token(strings, 8, '|')='5'" -o:DataGrid

Output in CSV

logparser -i:evt "SELECT distinct extract_token(strings, 5, '|') AS account,extract_token(strings, 8, '|') AS logontype FROM 'C:\data\exampleDIR\security.evt' WHERE EventID = 4624 and extract_token(strings, 8, '|')='5'" -o:csv >C:\data\exampleDIR\accounts_logon_logontype5.csv


 

Logon type 3: Network logon (output in grid)


With authentication method used and source computer

logparser -i:evt "SELECT distinct extract_token(strings, 5, '|') AS account,extract_token(strings, 8, '|') AS logontype,extract_token(strings,10, '|') as auth_method,extract_token(strings, 11, '|') as source_computer FROM 'C:\data\exampleDIR\security.evt' WHERE EventID = 4624 and extract_token(strings, 8, '|')='3'" -o:DataGrid

With source IP and source computer

logparser -i:evt "SELECT distinct extract_token(strings, 5, '|') AS account,extract_token(strings, 8, '|') AS logontype,extract_token(strings,18, '|') as source_ip,extract_token(strings, 11, '|') as computer FROM 'C:\data\exampleDIR\security.evt' WHERE EventID = 4624 and extract_token(strings, 8, '|')='3'" -o:DataGrid


 

Number of logon attempts per account


LogParser.exe -i:evt -o:datagrid "SELECT distinct extract_token(strings, 5, '|') AS account, COUNT(*) AS hits FROM 'C:\data\exampleDIR\security.evt' WHERE EventID = 4624 GROUP BY account ORDER BY hits"


 

                                                                    Service installation (event ID 7045)


LogParser.exe -i:evt -o:datagrid "SELECT Timegenerated,extract_token(strings, 0, '|') AS service, extract_token(strings, 1, '|') AS exe FROM 'C:\data\exampleDIR\system.evt' WHERE eventid=7045"

   Within a time range, when you are targeting a specific timeline:

LogParser.exe -i:evt -o:datagrid "SELECT Timegenerated,extract_token(strings, 0, '|') AS service, extract_token(strings, 1, '|') AS exe FROM 'C:\data\exampleDIR\system.evt' WHERE eventid=7045 and Timegenerated >'2016-06-09 09:30:00' and Timegenerated <'2016-06-09 20:30:00'"


 

                                                                    Process Creation


LogParser.exe -i:evt -o:datagrid "SELECT distinct extract_token(strings, 5, '|') AS exe, COUNT(*) AS hits FROM 'C:\data\exampleDIR\security.evt' WHERE EventID = 4688 GROUP BY exe ORDER BY hits"

Within a time range

LogParser.exe -i:evt -o:datagrid "SELECT distinct extract_token(strings, 5, '|') AS exe, COUNT(*) AS hits FROM 'C:\data\exampleDIR\security.evt' WHERE EventID = 4688 and Timegenerated >'2016-06-09 09:30:00' and Timegenerated <'2016-06-09 20:30:00' GROUP BY exe ORDER BY hits"


 

File Activity analysis


NTFSInfo and USNInfo are used to dig into file activity: for example, when a file landed on the machine, when changes to certain files were made, and when certain files were deleted. There is a fine SANS document about file access, creation and modification times and changes to MFT data (the MACE or MACB timestamps): https://www.sans.org/reading-room/whitepapers/forensics/filesystem-timestamps-tick-36842
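For a quick look at the timestamps of a single file before going to the NTFSInfo CSVs, a minimal PowerShell sketch (the path is a placeholder; note that the MFT entry change time, the "E" in MACE, is not exposed through this API and needs an MFT-level tool):

# Show the creation, last-write and last-access timestamps that the file system API exposes
Get-Item 'C:\data\exampleDIR\suspect.exe' |
    Select-Object FullName, CreationTime, LastWriteTime, LastAccessTime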

                                                                      NTFS info


 

Based on FileNameCreationDate

Logparser.exe -i:csv -o:datagrid "SELECT Filename AS Source, ParentName, File, FileNameCreationDate, LastModificationDate FROM 'C:\data\exampleDIR\ntfsinfo*.csv' WHERE (FileNameCreationDate >= TO_TIMESTAMP('2016-04-22 01:00:00', 'yyyy-MM-dd HH:mm:ss') AND FileNameCreationDate < TO_TIMESTAMP('2016-04-22 02:00:00', 'yyyy-MM-dd HH:mm:ss'))"

Based on FileNameCreationDate and LastModificationDate

Logparser.exe -i:csv -o:datagrid "SELECT Filename AS Source, ParentName, File, FileNameCreationDate, LastModificationDate FROM 'C:\data\exampleDIR\ntfsinfo*.csv' WHERE (FileNameCreationDate >= TO_TIMESTAMP('2016-04-22 01:00:00', 'yyyy-MM-dd HH:mm:ss') AND FileNameCreationDate < TO_TIMESTAMP('2016-04-22 02:00:00', 'yyyy-MM-dd HH:mm:ss')) OR (LastModificationDate >= TO_TIMESTAMP('2016-04-22 01:00:00', 'yyyy-MM-dd HH:mm:ss') AND LastModificationDate < TO_TIMESTAMP('2016-04-22 02:00:00', 'yyyy-MM-dd HH:mm:ss'))"

File search

Logparser.exe -i:csv -o:datagrid "SELECT Filename AS Source, ParentName, File, FileNameCreationDate, LastModificationDate FROM 'C:\data\exampleDIR\ntfsinfo*.csv' WHERE (ParentName LIKE '%malwarefileoranyfile%')"

Or

Logparser.exe -i:csv -o:datagrid "SELECT Filename AS Source, ParentName, File, FileNameCreationDate, LastModificationDate FROM 'C:\data\exampleDIR\ntfsinfo*.csv' WHERE (File LIKE '%malwarefileoranyfile%')"

Or

If you want to know whether PsExec was on that machine:

Logparser.exe -i:csv -o:datagrid "SELECT Filename AS Source, ParentName, File, FileNameCreationDate, LastModificationDate FROM 'C:\data\exampleDIR\ntfsinfo*.csv' WHERE (File LIKE '%psexec%')"


 

                                           USN Info(refer: https://en.wikipedia.org/wiki/USN_Journal)


To see whether certain files were deleted, various conditions can be tried, e.g. the full path of the location in which you want to look for deleted files.

Output in grid

LogParser "SELECT * FROM 'C:\data\exampleDIR\usninfo*.csv' where (Reason LIKE '%create%') and (FullPath LIKE '%malwarefileoranyfile%')" -o:datagrid

Output in CSV

LogParser "SELECT * FROM 'C:\data\exampleDIR\USNInfo_C_.csv' where (Reason LIKE '%delete%')" -o:csv >deleted_files.csv
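If you do not already have USNInfo CSV exports to hand, the change journal can also be inspected directly with the built-in fsutil tool on recent Windows versions; a minimal sketch (run from an elevated prompt, output path is a placeholder):

# Show journal metadata (journal ID, first USN) for the C: volume
fsutil usn queryjournal C:

# Dump the raw journal records to a text file for offline searching
fsutil usn readjournal C: > C:\data\exampleDIR\usn_raw.txt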


I hope those of you in the field of forensics or security incident response RCA enjoy it. Please add your own tricks and suggestions in the comments section; this could become an open, living document if you like. Thanks for reading.

Introducing the Microsoft Azure Technical Support Team blog! [Updated 8/20]

App-V 5 – Virtual Shell subsystem failure 2A-00000002


Hi all,

I came across a really interesting error code recently and thought it was worth sharing why it occurs. The error message presented to the user when launching an application looks like this:

App-V 5.0 SP3 Client – 0x9B50172A-0x2 / 0x9B50172A-0x00000002

[Screenshot: 2a-2_sp3 – App-V 5.0 SP3 error dialog]

App-V 5.1 Client – 0x9B501A2A-0x2 / 0x9B501A2A-0x00000002

[Screenshot: 2a-2_51 – App-V 5.1 error dialog]

You can see that the error codes look different, but they represent the same error; the difference comes from the client version. If you look in the event log you will see the following message:

App-V 5.0 SP3 Client
Process 4564 failed to start due to Virtual Shell subsystem failure. Package ID {a3ee2888-fda4-43b8-8ca5-7c2b58a26893}. Version ID {37bdd39e-3760-47f0-b950-2b01e6827930}. Error: 0x9B50172A-0x2

App-V 5.1 Client
Process 4260 failed to start due to Virtual Shell subsystem failure. Package ID {a3ee2888-fda4-43b8-8ca5-7c2b58a26893}. Version ID {37bdd39e-3760-47f0-b950-2b01e6827930}. Error: 0x9B501A2A-0x2

Whenever we receive these messages, we need to work out what the error code translates to and which debug event log to enable.

Both error codes translate to the following:

Code: 9B50172A-00000002 / 9B501A2A-00000002
Result: Error
Type: Windows
HRESULT: 0x00000002

# winerror.h selected.
# 2 matches found for "0x00000002"
# for hex 0x2 / decimal 2
  ERROR_FILE_NOT_FOUND                                           winerror.h
# The system cannot find the file specified.
# as an HRESULT: Severity: SUCCESS (0), FACILITY_NULL (0x0), Code 0x2
# for hex 0x2 / decimal 2
  ERROR_FILE_NOT_FOUND                                           winerror.h
# The system cannot find the file specified.
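If you do not have err.exe to hand, the same Win32 translation can be confirmed from PowerShell; a minimal sketch:

# Translate the Win32 error code 0x2 returned in the HRESULT above
(New-Object System.ComponentModel.Win32Exception 2).Message
# -> The system cannot find the file specified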

From a debug perspective you can enable the following logs, depending on the App-V client version you're using:

App-V 5.0 SP3 - Microsoft-AppV-Subsystems-VShell/Debug
App-V 5.1 - Microsoft-AppV-Client/Debug (ServiceLog)

In the debug event log, you will see the following message:

[Screenshot: 2a-2_sp3_EL_VS – Virtual Shell debug event log entry]

The question now is: what are the integration points with the App-V client?

Rules of integration

When App-V applications are published to a computer with the App-V Client, some specific actions take place as described in the list below:

  • Global Publishing: Shortcuts are stored in the All Users profile location and other extension points are stored in the registry in the HKLM hive.
  • User Publishing: Shortcuts are stored in the current user account profile and other extension points are stored in the registry in the HKCU hive.

https://technet.microsoft.com/en-us/itpro/mdop/appv-v5/application-publishing-and-client-interaction

If you check the registry for the user, you will see the integration points for the packages published to the user.

[Screenshot: Reg_Integration – integration keys in the user registry hive]

There are two keys:

  • Integration Location: the path in the file system to the junction point of the package, using the PackageID only
  • Staged Location: the real location within the App-V cache, with both the PackageID and VersionID specified

If you check the junction point in Explorer you will see that the integration location is pointing to the staged location without the version GUID.

[Screenshot: Package_Junction – the junction point viewed in Explorer]

To prove that this is the case you can run the following commands which will display the junction points from the Integration location:

cd %LOCALAPPDATA%\Microsoft\AppV\Client\Integration
dir /AL

[Screenshot: Integration_Junction – dir /AL output showing the junction points]
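The same check can be done from PowerShell instead of cmd; a minimal sketch (the LinkType and Target properties on the directory objects are populated on Windows 10 / PowerShell 5 and later):

# List the reparse points (junctions) under the App-V Integration folder and where they point
Get-ChildItem "$env:LOCALAPPDATA\Microsoft\AppV\Client\Integration" |
    Where-Object { $_.Attributes -band [IO.FileAttributes]::ReparsePoint } |
    Select-Object Name, LinkType, Target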

That’s the context of what the integration point is with App-V but what causes the error?

If you check the package that’s causing the error in my case, it’s the PackageID “A3EE2888-FDA4-43B8-8CA5-7C2B58A26893” which can be seen from the debug event log.

Note: this is important. The Virtual Shell subsystem failure returns the PackageID of the package; however, if the package is in a connection group, the PackageID and VersionID in the message will actually be the connection group's GroupID and VersionID, so make sure you're using the debug event logs to find which package is causing the issue. You can see below that the IDs have changed once the package is in a connection group.

Process 600 failed to start due to Virtual Shell subsystem failure. Package ID {8d7a6c1a-0336-4a3f-a758-c3d7c4a2b225}. Version ID {735fa6ea-7ce8-4c85-b4f7-186fd97f26e6}. Error: 0x9B50172A-0x2

If you check the PackageID, it returns the following, and it shows something really interesting: the package is published globally and not to the user.

[Screenshot: Package_Integration – the package's publishing state]
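A quick way to check the publishing state from PowerShell is sketched below; the GUID is the example package from this post, and the IsPublishedGlobally / IsPublishedToUser property names are what recent App-V client module versions expose, so compare with what Get-AppvClientPackage returns on your client:

# Requires the App-V client PowerShell module on the affected machine
Import-Module AppvClient
Get-AppvClientPackage -All |
    Where-Object { $_.PackageId -eq 'a3ee2888-fda4-43b8-8ca5-7c2b58a26893' } |
    Select-Object Name, PackageId, VersionId, IsPublishedGlobally, IsPublishedToUser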

The fact that the package is not published to the user is the interesting point here, because if you check the registry for the user you will see that the integration point is still set in the user's registry hive:

[Screenshot: Reg_Package – the stale integration point for the package in the user's registry hive]

That's the cause of this error code: the App-V client reads the integration point information in the registry to determine where the integration location is, then checks whether the path is valid; if it is not, you get the "system cannot find the file specified" HRESULT. If you delete the key "{A3EE2888-FDA4-43B8-8CA5-7C2B58A26893}", or remove the "Integration Location" and "Staged Location" entries, the application will launch successfully.

If you want to check if you have a mismatched integration point you can use the following script:

<#
.DISCLAIMER The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.
#>

########################################
# Function Test-RegistryValue
########################################

Function Test-RegistryValue($regkey, $name){

Get-ItemProperty $regkey $name -ErrorAction SilentlyContinue |
Out-Null
$?

}

########################################
# Function Check-UserRegistryIntegrationPackages
########################################

Function Check-UserRegistryIntegrationPackages(){

$User = "CurrentUser"
$Integration = "SOFTWARE\Microsoft\AppV\Client\Integration\Packages"

# Opening Current User Registry
$registry = [microsoft.win32.registrykey]::OpenRemoteBaseKey("$User","localhost")

$i_CAVURIP = 1
$Results_CAVURIP = @()

$Packages = $registry.OpenSubKey($Integration)

$SubKeys = $Packages.GetSubKeyNames()
$SubkeysCount = $SubKeys.Count

    foreach($Subkey in $SubKeys){

        if((Test-RegistryValue "HKCU:\$Integration\$Subkey" "Integration Location") -eq $true){

        $SL = Get-ItemProperty "HKCU:\$Integration\$Subkey" -Name "Staged Location" | select -ExpandProperty "Staged Location"
        $IL = Get-ItemProperty "HKCU:\$Integration\$Subkey" -Name "Integration Location" | select -ExpandProperty "Integration Location"

        $IL_Expanded = [System.Environment]::ExpandEnvironmentVariables($IL)

        $Result_CAVURIP = New-Object System.Object
        $Result_CAVURIP | Add-Member -MemberType NoteProperty -Name ID -Value $i_CAVURIP
        $Result_CAVURIP | Add-Member -MemberType NoteProperty -Name RegKey -Value "HKCU:\$Integration\$Subkey"
        $Result_CAVURIP | Add-Member -MemberType NoteProperty -Name StagedLocation -Value $SL
        $Result_CAVURIP | Add-Member -MemberType NoteProperty -Name IntegrationLocation -Value $IL
        $Result_CAVURIP | Add-Member -MemberType NoteProperty -Name IntegrationLocationExpanded -Value $IL_Expanded

            if((test-path "$IL_Expanded") -eq $true){

            $Result_CAVURIP | Add-Member -MemberType NoteProperty -Name IntegrationLocationValid -Value $true

            }

            else {

            $Result_CAVURIP | Add-Member -MemberType NoteProperty -Name IntegrationLocationValid -Value $false

            }

        $Results_CAVURIP += $Result_CAVURIP
        $i_CAVURIP++

        }

    }

    return $Results_CAVURIP

}

Check-UserRegistryIntegrationPackages | ? { $_.IntegrationLocationValid -eq $false }

The main question, which I'm sure you're thinking about, is: what can cause this?

Generally, it’s caused by profile management solutions preserving the HKCU\Software\Microsoft\AppV registry key.

A typical scenario: a package has been published to the user, you then find that it's an application all users require, so you decide to publish it globally (to the machine) and remove the user publishing.

Because the profile management solution is preserving the entire HKCU\Software\Microsoft\AppV key, the user integration is captured and not removed when the user logs back in, even though the package is now published globally.

The fix is to remove the entry under the HKCU\Software\Microsoft\AppV location; the application will then launch successfully.
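If you have confirmed the stale key is the culprit, here is a hedged remediation sketch (the GUID is the example package from this post; remove only entries for packages that are no longer user-published, and export the key first if you want a backup):

# Remove the stale user integration key left behind by the profile management solution
# (run in the affected user's context)
Remove-Item "HKCU:\SOFTWARE\Microsoft\AppV\Client\Integration\Packages\{A3EE2888-FDA4-43B8-8CA5-7C2B58A26893}" -Recurse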

SCRIPT DISCLAIMER
The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

To conclude if you follow the process above you can find out which package is causing the application launch failure.

David Falkus | Senior Premier Field Engineer | Application Virtualization, PowerShell, Windows Shell

SharePoint Online Development Preferences


I spoke about SharePoint Online and Office 365 here: SharePoint Online and Office365 Business Productivity.

Office 365, along with all the SaaS offerings it provides, is spreading across the globe, and a lot of companies depend on it heavily in their day-to-day business activities. That raises the need for businesses to build custom applications that extend and consume the available functionality. To achieve that, let us take a look at the different options.

There are several ways to develop custom apps and extend the SharePoint Online Functionality:

  1. CSOM (Client-Side Object Model): used for working with SharePoint data, both Online and On-Premises. Developers create Windows applications, web applications, universal apps, command-line applications or any other type of application and call the available classes. There is a JavaScript version of the CSOM called the JSOM (JavaScript Object Model). Note that the CSOM and JSOM are also used with Add-ins (see the third bullet).
  2. SharePoint REST APIs: like the CSOM, these can be used Online and On-Premises; they let you make direct REST calls, whereas with the CSOM you use the available functions and classes.
  3. Add-ins: previously called the app model, these come in two types, SharePoint-hosted and provider-hosted. In SharePoint-hosted add-ins developers write JavaScript that runs client side, while with provider-hosted add-ins developers can write .NET or any server-side language and deploy it on a remote server such as Azure, which handles cases like remote event receivers.
  4. Office 365 REST APIs: used mainly for Exchange and OneDrive and available for Office 365 only; there are some SDKs built on top of them, but they do not provide the full functionality.
  5. The newly announced SharePoint Framework
  6. No-code solutions such as Flow and PowerApps

It is worth mentioning too that you can use PowerShell Scripting to call SharePoint Online APIs.
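As a small illustration of that last point, here is a minimal PowerShell + CSOM sketch; the site URL is a placeholder and the assembly paths assume the SharePoint Online Client Components SDK is installed, so adjust both to your environment (note this uses legacy credentials, so MFA-enabled accounts need a different auth flow):

# Load the CSOM assemblies (paths assume the SharePoint Online Client Components SDK)
Add-Type -Path 'C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.dll'
Add-Type -Path 'C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.Runtime.dll'

$siteUrl = 'https://contoso.sharepoint.com/sites/dev'   # placeholder site URL
$cred = Get-Credential                                  # SharePoint Online account

$ctx = New-Object Microsoft.SharePoint.Client.ClientContext($siteUrl)
$ctx.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($cred.UserName, $cred.Password)

$web = $ctx.Web
$ctx.Load($web)        # queue the request
$ctx.ExecuteQuery()    # execute it against SharePoint Online
$web.Title             # the site title retrieved through CSOM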

 

Unsupported development options:

  • Full trust code, as used for On-Premises development, is not an option for SharePoint Online, although it remains an option on premises.
  • Sandboxed solutions: Microsoft recently announced their deprecation; they used to be a sort of subset of full trust code.

 

Quick steps to put you on the road for developing using CSOM:

  • Install Visual Studio 2015.
  • Download and install NuGet client and integrate it with Visual Studio https://dist.nuget.org/index.html
  • Create project.
  • Add NuGet package.
  • Add reference.
  • Call CSOM.

You can watch this video where I have started using CSOM: https://channel9.msdn.com/Blogs/MVP-Office-Servers-and-Services/Trigger-SharePoint-Online-CSOM

 


Written by: John Naguib

Microsoft MVP, Solution Architect/Senior Consultant

Wiki Ninja Blogger, SharePoint Expert and Speaker.

Twitter | Blog | Wiki User Page | Channel9

