
Text Analytics in Python – made easier with Microsoft’s Cognitive Services APIs


By Theo van Kraay, Data and AI Solution Architect at Microsoft

Microsoft’s Azure cloud platform offers an array of discrete, granular services in the AI + Machine Learning domain that allow AI developers and data engineers to avoid reinventing the wheel and to consume reusable APIs.

The following is a straightforward Python-based example illustrating how one might consume some of these API services, in combination with open source tools, to analyse text within PDF documents.

First, we create a Text Analytics API service in Azure. From your Azure subscription (click here to sign up for free) go to Create a resource -> AI + Cognitive Services -> Text Analytics API:

 

 

Give your API a unique name, select your Azure subscription and region, select your pricing tier, and select (or create) a resource group for your API:

 

 

Hit create:

 

 

When your resource has been deployed, go into it and you will see the following screen. Take a note of the endpoint, and hit “Keys” to get your API key:

 

 

Make a note of the key (either will do):

 

 

For Python, I highly recommend Microsoft’s excellent language-agnostic IDE, Visual Studio Code (see here for a tutorial on how to configure it for Python – ensure you use a Python 3 distribution). For this example, your Python code will need the following imports (the IDE should highlight which modules you need to install):

 

import urllib.request
import urllib.response
import sys
import os, glob
import tika
from tika import parser
import http.client, urllib
import json
import re
tika.initVM()

 

Creating a function to parse PDF content and convert to an appropriate collection of text documents (to later be converted to JSON and sent to our API) is straightforward if we make use of the parser in the Tika package:

 

def parsePDF(path):
    documents = {'documents': []}
    count = 1
    for file in glob.glob(path):
        parsedPDF = parser.from_file(file)
        text = parsedPDF["content"]
        text = text.strip('\n')
        text = text.encode('ascii', 'ignore').decode('ascii')
        documents.setdefault('documents').append({"language": "en", "id": str(count), "text": text})
        count += 1
    return documents

 

Note that the Python Tika module is in fact a wrapper for the Apache Foundation’s Tika project, which is an open source library written in Java, so you will need to ensure you have Java installed on the machine on which you are running your Python code. While the tika.initVM() call should instantiate a Java Virtual Machine (JVM), you may need to do this manually depending on your environment. If so, simply locate the JAR file (which should have been pulled down as part of installing the Tika module) and start the server. For example, in a Windows environment, run the following from the command line, having navigated to the location of the JAR file:

java -jar tika-server.jar

 

 

Once we are sure we have the prerequisites for parsing the PDF content, we can set up the access credentials for the Text Analytics API and create a function that will call it for our documents. Initially, we want to do sentiment analysis on the content of each PDF document, so we specify “Sentiment” as the operation within path:

 

# Replace the accessKey string value with your valid access key.
accessKey = 'f70b588bd8d549b4a87bed83d41140b7'
url = 'westcentralus.api.cognitive.microsoft.com'
path = '/text/analytics/v2.0/Sentiment'
 
def TextAnalytics(documents):
    headers = {'Ocp-Apim-Subscription-Key': accessKey}
    conn = http.client.HTTPSConnection(url)
    body = json.dumps(documents)
    conn.request("POST", path, body, headers)
    response = conn.getresponse()
    return response.read()

 

From there, the code to invoke the functions is simple:

 

docs = parsePDF("Data/PDFs/*.pdf")
print(docs)
print()
print('Please wait a moment for the results to appear.\n')
result = TextAnalytics(docs)
print(json.dumps(json.loads(result), indent=4))

 

You should see a response like the one below, showing the sentiment score for two PDF documents:
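If you want to work with the scores programmatically rather than just pretty-print the JSON, a minimal sketch is shown here (assuming the v2.0 response shape, where each entry in documents has an id and a score between 0 and 1; the 0.5 threshold used for labelling is purely illustrative):

response = json.loads(result)
for doc in sorted(response.get("documents", []), key=lambda d: d["score"]):
    label = "negative" if doc["score"] < 0.5 else "positive"  # illustrative cut-off only
    print("Document {}: score {:.2f} ({})".format(doc["id"], doc["score"], label))
for err in response.get("errors", []):
    print("Document {} failed: {}".format(err.get("id"), err.get("message")))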

 

 

If we want to shift the analytical focus to key phrase extraction, we simply change the API operation to “KeyPhrases”:

 

path = '/text/analytics/v2.0/keyPhrases'

 

The output would change accordingly:

 

 

The Cognitive Services Text Analytics API also supports language detection:

 

path = '/text/analytics/v2.0/languages'
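With the path switched to the languages operation, the same TextAnalytics function can be reused. A minimal sketch for reading the result (assuming the v2.0 response, where each document carries a detectedLanguages list with name, iso6391Name and score fields) might be:

result = TextAnalytics(docs)
for doc in json.loads(result).get("documents", []):
    for lang in doc.get("detectedLanguages", []):
        print("Document {}: {} ({}), score {:.2f}".format(
            doc["id"], lang["name"], lang["iso6391Name"], lang["score"]))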

 

 

Note that these services have upper limits on the number of documents per request and on the overall size of the JSON request body, so you will need to bear this in mind when consuming them.
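If you are sending many or very large documents, one simple way to stay under those limits is to split the payload into smaller batches and call the API once per batch. A rough sketch follows (the batch size of 100 is illustrative, not a documented limit):

def batch_documents(all_documents, batch_size=100):
    # Yield successive payloads of at most batch_size documents each, so that
    # no single request exceeds the API's document-count or size limits.
    docs_list = all_documents['documents']
    for i in range(0, len(docs_list), batch_size):
        yield {'documents': docs_list[i:i + batch_size]}

for batch in batch_documents(docs):
    print(json.dumps(json.loads(TextAnalytics(batch)), indent=4))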

For more samples on using Azure Cognitive Services Text Analytics API with Python and many other languages, see here. To learn more about the full suite of Azure AI & Cognitive Services, see here.


Windows Subsystem for Linux and BASH Shell (2018 Update)


Hello everyone! Allen Sudbring here again, PFE in the Central Region, with an update to a blog post I did on the Windows Subsystem for Linux and Bash on Ubuntu, found here: https://blogs.technet.microsoft.com/askpfeplat/2016/05/02/installing-bash-on-ubuntu-on-windows-10-insider-preview/.

It's been a while since I posted on this topic, and I wanted to update everyone on the exciting new options for the Windows Subsystem for Linux and the different Linux distributions that are now available in the Microsoft Store for download.

First, a little history. Before the Windows 10 Anniversary Update, we introduced the Windows Subsystem for Linux in the Windows Insider Preview. It was a new feature that allowed users to install a full Linux Bash shell in Windows. Introducing this feature made the all-in-one administration/developer workstation a reality. Running a Linux VM to access Linux tools, or using the other workarounds that have been employed over the years to port Linux tools to Windows, is no longer necessary.

The original install did not offer a choice of multiple Linux distributions, nor the ability to get those distributions from the Microsoft Store.

Instead of reinventing the wheel, I'll point to docs.microsoft.com, which has a great article on how to install the Windows Subsystem for Linux on Windows 10, as well as the exciting news that WSL can be installed on Windows Server starting with version 1709.

Windows Subsystem for Linux Documentation

From <https://docs.microsoft.com/en-us/windows/wsl/about>

Windows 10 Installation Guide

From <https://docs.microsoft.com/en-us/windows/wsl/install-win10>

Windows Server Installation Guide

From <https://docs.microsoft.com/en-us/windows/wsl/install-on-server>

I encourage everyone to check out this new feature, especially if you manage Linux and Windows Server or do cross-platform development!!

Protected: Invoke-Adversary – Simulating Adversary Operations


This content is password protected. To view it please enter your password below:

Microsoft Azure Databricks


During the Datavore 2018 event in Montreal, I helped run a workshop on Azure Databricks. This article goes back over the various steps of the workshop I delivered.

Introducing Azure Databricks

Azure Databricks is a collaborative big data and machine learning platform that automatically scales to your needs. This new Azure service is an Apache Spark-based analytics platform optimized for the Azure cloud platform. Azure Databricks includes the complete set of open-source Apache Spark cluster technologies and capabilities. In addition, with this new solution you will be able to easily and securely run workloads such as artificial intelligence, predictive analytics, or real-time analytics.

Workshop

This workshop consists of two main parts:

Part 1: setting up the Azure environment

Part 2: using Databricks

  • Workshop: Data Engineering with Databricks

Part 1: setting up the Azure environment

Creating a resource group

A resource group is a logical grouping of your Azure resources that makes them easier to manage. Every Azure resource must belong to a resource group.

Azure portal: https://portal.azure.com

From the Azure portal, click on "Resource groups", then on the "Add" button.

Fill in the information for your resource group.

For this lab, choose the "East US 2" region. Click on the "Create" button.

Once the resource group has been created, a notification appears at the top right of the screen. Click on "Go to resource group".

Your resource group is created and ready to use.

Creating a storage account

An Azure storage account is a cloud service that provides highly available, secure, durable, scalable, and redundant storage. Azure Storage consists of blob storage, file storage, and queue storage. For our lab, we will use blob storage. For more information about Azure Storage, see this article: https://docs.microsoft.com/fr-fr/azure/storage/

To get a quick idea of the concept: a storage account can contain one or more containers, which in turn contain the blobs (folders and files).

The storage account can be created through the portal in the same way as the resource group, but it is also possible to use the "Cloud Shell" built into the portal. That is what we will do below.

At the top of the Azure portal, click on the "Cloud Shell" icon.

The "Cloud Shell" opens at the bottom of the screen. Click on "Bash (Linux)".

If this is the first time you run the "Cloud Shell", the following window appears. Choose your Azure subscription, then click on "Create storage".

Once set up, the "Cloud Shell" is ready to use.

A storage account can be created with the following command:

az storage account create --name <yourStorageAccount> --resource-group <yourResourceGroup> --location eastus2 --sku Standard_LRS

For this example, here is the corresponding command line:

az storage account create --name datavorestorage --resource-group datavore --location eastus2 --sku Standard_LRS

After a few seconds, the storage account is created. Click on the "Refresh" button of your resource group; your storage account should then appear.

If you click on your storage account, you can access its various functions and properties, in particular the connection information, by clicking on "Access keys".

Now that the storage account is available, we will create the containers to store our various files.

Before the containers can be created, you need access to the storage account. To do this, retrieve the account access keys. Below is a command to retrieve the keys.

az storage account keys list --account-name <yourStorageAccount> --resource-group <yourResourceGroup> --output table

In our example, here is the command used:

az storage account keys list --account-name datavorestorage --resource-group datavore --output table

Copy one of the two keys, then export your storage account information:

export AZURE_STORAGE_ACCOUNT="<YourStorageAccount>"

export AZURE_STORAGE_ACCESS_KEY="<YourKey>"

Here are the command lines for our lab:

export AZURE_STORAGE_ACCOUNT="datavorestorage"

export AZURE_STORAGE_ACCESS_KEY=" yqwBdGRgCW0LWdEuRwGdnxKPit+zXNrrVOxXQy57wq6oHmCSy2NnoA3Pr9E4pMgJPwcVeg8uQt1Uzk5YAWntiw=="

Creating the containers

az storage container create --name nyctaxi-consumption
az storage container create --name nyctaxi-curated
az storage container create --name nyctaxi-demo
az storage container create --name nyctaxi-raw
az storage container create --name nyctaxi-ref-data
az storage container create --name nyctaxi-staging
az storage container create --name nyctaxi-scratch

To check that your containers were created, from the Azure portal, click on your Azure storage account.

In the "Overview" view, click on "Blobs".

You should see the containers created previously.

Copying the files for the lab

Reference data

The files are stored in one of our storage accounts. Above all, do not modify the key below.

Run this script in the "Cloud Shell":

export SRC_STORAGE_ACCOUNT="franmerstore"

export SRC_STORAGE_ACCESS_KEY="eSSFOUVLg4gB3iSxuFVh/lDVoMeQHCqVj67xHdaYcPYoMSUqFuD+E2OeDhY4wRZCEF97nCRGOV0i7WJDyoOd7g=="

Then run the following command:

azcopy --source https://franmerstore.blob.core.windows.net/nyctaxi-staging/reference-data/ --destination https://<YourStorageAccount>.blob.core.windows.net/nyctaxi-staging/reference-data/ --source-key $SRC_STORAGE_ACCESS_KEY --dest-key $AZURE_STORAGE_ACCESS_KEY --sync-copy --recursive

In our example, here is the command to run with the storage account used in this article:

azcopy --source https://franmerstore.blob.core.windows.net/nyctaxi-staging/reference-data/ --destination https://datavorestorage.blob.core.windows.net/nyctaxi-staging/reference-data/ --source-key $SRC_STORAGE_ACCESS_KEY --dest-key $AZURE_STORAGE_ACCESS_KEY --sync-copy --recursive

Copying the transactional data

For this lab, we will take just a subset of the data. If you want to retrieve the full dataset, refer to the index at the end of the document (this will take at least 2 hours).

For this lab we will only work with the year 2017, but nothing stops you from rerunning the scripts later, simply changing the year.

Here are the scripts to copy the transactional data:

azcopy
--source https://franmerstore.blob.core.windows.net/nyctaxi-staging/transactional-data/year=2017/ --destination https://<YourStorageAccount>.blob.core.windows.net/nyctaxi-staging/transactional-data/year=2017/ --source-key $SRC_STORAGE_ACCESS_KEY --dest-key $AZURE_STORAGE_ACCESS_KEY --sync-copy
--recursive

In our example, we therefore have:

azcopy
--source https://franmerstore.blob.core.windows.net/nyctaxi-staging/transactional-data/year=2017/ --destination https://datavorestorage.blob.core.windows.net/nyctaxi-staging/transactional-data/year=2017/ --source-key $SRC_STORAGE_ACCESS_KEY --dest-key $AZURE_STORAGE_ACCESS_KEY --sync-copy
--recursive

To check the data, you can run the following command:

az storage blob list
--container-name nyctaxi-staging --output table

Creating an Azure SQL Database

From the Azure portal, click on "+ Create a resource", "Databases", then "SQL Database".

Then fill in the values for your database.

You may need to create a SQL server.

Click on "Configure required settings", then "Create a new server".

Fill in the SQL server information. Remember your password!

Click on "Select".

Continue filling in your database information, then click on "Create".

Creating the SQL tables

From the Azure portal, click on your newly created database.

Click on "Query editor", then on "Login".

Enter your connection information, then click on "OK".

Creating the tables

From the Azure portal, copy the T-SQL script below into the editor, then click on "Run".

DROP TABLE IF EXISTS TRIPS_BY_YEAR;

CREATE TABLE TRIPS_BY_YEAR (

TAXI_TYPE VARCHAR(10),

TRIP_YEAR INT,

TRIP_COUNT BIGINT

);


Repeat the operation for the following two tables

DROP TABLE IF EXISTS TRIPS_BY_HOUR;

CREATE TABLE TRIPS_BY_HOUR (

TAXI_TYPE VARCHAR(10),

TRIP_YEAR INT,

TRIP_HOUR INT,

TRIP_COUNT BIGINT

);

 

DROP TABLE IF EXISTS BATCH_JOB_HISTORY;

CREATE TABLE BATCH_JOB_HISTORY

(

batch_id int,

batch_step_id int,

batch_step_description varchar(50),

batch_step_status varchar(10),

batch_step_time varchar(25)

);

ALTER TABLE BATCH_JOB_HISTORY

ADD CONSTRAINT batch_step_time_def

DEFAULT CURRENT_TIMESTAMP FOR batch_step_time;

You should see your tables in the Azure portal.


Creating an Azure Databricks resource

From the Azure portal, click on "+ Create a resource", "Data + Analytics", then "Azure Databricks".

Fill in the information for your workspace. Make sure to place it in your resource group. Select the "East US 2" region.

Click on "Create".

After your Databricks workspace has been created, your resource group should contain the following resources:

Provision a cluster and start your analyses

Click on your Databricks resource.

Then click on the "Launch Workspace" button.

Your Azure infrastructure is ready. You can start analyzing your data with Databricks.

Part 2: using Databricks

Workshop: Data Engineering with Databricks

Creating the cluster

Once in the Databricks workspace, on the left, click on "Clusters", then on "Create Cluster".

Here are the cluster parameters to use:

  • Databricks Runtime Version: 4
  • Python version: 3
  • Worker Type: Standard_DS13_v2
  • Spark Config: spark.hadoop.fs.azure.account.key.<yourStorageAccount>.blob.core.windows.net <yourKey>

Below is the example configuration for workshop 2.

Click on "Create cluster".

After a few minutes, your cluster is ready.

Notebooks

On the left of the screen, click on "Workspace", then "Users".

To the right of your alias, click on the arrow, then on "Import".

In the "Import Notebooks" window, click on "URL" and copy the address below:

https://franmerstorage.blob.core.windows.net/databricks/Notebooks/nyc-taxi-workshop.dbc

Click on the "Import" button.

The notebooks should appear in your workspace.

Using a notebook with a cluster

Before you can start using a notebook, you must attach it to a cluster.

Click on "Detached", then select a cluster.

Configuring the notebooks

Remember to fill in your storage account.

In the "2-CreateDatabaseObjects" notebook, fill in the fields for your Azure database.

In this example we will therefore have:


Click on the "1-LoadReferenceData" notebook to continue the lab with Databricks.

During the workshops, to go back to your workspace to switch notebooks, click on "Workspace" on the left of the screen.

From now on, continue the lab from the notebooks. Below are some tips for the different notebooks in the workshop.

Some jobs will take several minutes. You can check their progress by clicking on the arrows to the left of the jobs.

In part "05-GenerateReports", in the "Report-1" notebook, you will find example reports.

On one of the reports, click on the chart icon.

In the contextual menu, click on "Show Code".

The script returning the results used to build the charts is displayed.

Below the chart, many options are available to change how your data is visualized.

In part "06-BatchJob", in the "GlobalVarsMethods" notebook, don't forget to fill in the values of your Azure SQL database.

Command 1

Command 2

Interactive report with Power BI Desktop

After finishing the last notebook of the lab, it can be interesting to connect to the data with Power BI Desktop.

From Power BI Desktop, click on "Get Data", then "More".

Select the "Spark" connector.

The following window appears. You need to provide the connection information.

For now, the Databricks cluster address is not trivial to find, but here is how to do it.

From your Databricks workspace, click on "Clusters", then on the name of your cluster ("DatavoreCluster" in our example).

Once on your cluster's page, click on "JDBC/ODBC".

In the "JDBC URL" field, compose the server address from the two elements highlighted in red, adding https at the beginning.

For our example this gives:

https://eastus2.azuredatabricks.net:443/sql/protocolv1/o/1174394268694420/0317-035245-abed504

So here is what it looks like in Power BI Desktop. Click on "OK".

Now the credentials are needed.

To do this, you need to generate an access token on the Databricks side.

In the Databricks workspace, click on the user icon at the top right, then on "User Settings".

Click on the "Generate New Token" button.

Give your token an explicit name and click on the "Generate" button.

From the window that appears, copy and keep this token safe, because it will not be possible to retrieve it later.

Click on the "Done" button.

The token is generated!

On the Power BI Desktop side, paste the token into the "Password" field, then use token as the "User Name" (you couldn't make it up).

Click on the "Connect" button.

You are now ready to explore your data. Select the tables below, then click on the "Load" button.

Once the data is connected to the report, click on the relationships icon on the left.

With a simple drag and drop, create the relationships between the tables.

The relationship window appears; click on the "OK" button.

After creating all the relationships, here is an example of what you can get:

Here is a sample report.

Index

If you want to load all the data into your storage account, you can do so with the following command:

azcopy
--source https://franmerstore.blob.core.windows.net/nyctaxi-staging/ --destination https://<YourStorageAccount>.blob.core.windows.net/nyctaxi-staging/ --source-key $SRC_STORAGE_ACCESS_KEY --dest-key $AZURE_STORAGE_ACCESS_KEY --sync-copy --recursive

In our example this command line will be:

azcopy
--source https://franmerstore.blob.core.windows.net/nyctaxi-staging/ --destination https://datavorestorage.blob.core.windows.net/nyctaxi-staging/ --source-key $SRC_STORAGE_ACCESS_KEY --dest-key $AZURE_STORAGE_ACCESS_KEY --sync-copy --recursive

Microsoft Azure Databricks (English version)


During the Datavore 2018 event in Montreal, we delivered a workshop on Azure Databricks. I prepared the data preparation and engineering part of the workshop. In this article I share the content we used during the workshop.

Presentation of Azure Databricks

Azure Databricks is our new collaborative big data and machine learning platform. This new first-class Azure service is an Apache Spark-based analytics platform optimized for Azure. Azure Databricks comprises the complete open-source Apache Spark cluster technologies and capabilities. Workloads like artificial intelligence, predictive analytics, or real-time analytics can be easily and securely handled by Azure Databricks.

This article consists of two major parts

  • Part 1: Setting up the Azure environment
  • Part 2: Using Databricks
    • Workshop: Data Engineering with Databricks

Part 1: Azure configuration

Resource Group creation

A resource group is a logical grouping of your Azure resources that makes them easier to manage. All Azure resources must belong to a resource group.

First, you need to connect to the Azure portal: https://portal.azure.com

From the Azure portal, click « Resource groups » and then click « Add »

Fill in the information of your resource group.

For this lab, choose the region "East US 2". Click on "Create"

Once the resource group is created, a notification appears at the top right of the screen. Click on "Go to resource group".

Your resource group is created and ready to be used.

Create an Azure Storage Account

An Azure storage account is a cloud service that provides highly available, secure, durable, scalable, and redundant storage. Azure Storage consists of blob storage, file storage, and queue storage. For our lab, we will use blob storage. For more information about Azure Storage, see this article: https://docs.microsoft.com/fr-fr/azure/storage/

To get a quick idea of the concept: a storage account can contain one or more containers, which in turn contain the blobs (folders and files).

The storage account can be created through the portal, in the same way as the resource group, but it is also possible to use the "Cloud Shell" integrated into the portal. That's what we're going to do below.

At the top of the Azure portal, click the "Cloud Shell" icon.

The « Cloud Shell » will open at the bottom of the screen. Click on "Bash (Linux)"

If this is the first time you run the "Cloud Shell", the next window will appear. Choose your Azure subscription, and then click "Create storage".

Once set, the "Cloud Shell" is ready to be used.

The creation of a storage account can be done with the following command:

az storage account create --name <yourStorageAccount> --resource-group <yourResourceGroup>  --location eastus2 --sku Standard_LRS

In the case of this example, here is the corresponding command line:

az storage account create --name datavorestorage --resource-group datavore --location eastus2 --sku Standard_LRS

After a few seconds, the storage account is created. Click on the "Refresh" button of your resource group; your storage account should then appear in the resource group.

If you click on your storage account, you can access its various functions and properties, in particular the connection information, by clicking on "Access keys".

Now that the storage account is available, we'll create the containers to store our different files.

Before creating the containers, you must be able to access the storage account. To do this, you have to retrieve your account keys. Below is a command to do so (you can also retrieve your keys through the portal, as shown above).

az storage account keys list --account-name <yourStorageAccount > --resource-group <yourResourceGroup> --output table

In our example, here is the command used:

az storage account keys list --account-name datavorestorage --resource-group datavore --output table

Copy one of the two keys. Then export the information from your storage account with the following commands:

export AZURE_STORAGE_ACCOUNT="<YourStorageAccount>"

export AZURE_STORAGE_ACCESS_KEY="<YourKey>"

Here are the commands used for our workshop:

export AZURE_STORAGE_ACCOUNT="datavorestorage"

export AZURE_STORAGE_ACCESS_KEY=" yqwBdGRgCW0LWdEuRwGdnxKPit+zXNrrVOxXQy57wq6oHmCSy2NnoA3Pr9E4pMgJPwcVeg8uQt1Uzk5YAWntiw=="

Container creation

To create the containers, copy the script below into your Cloud Shell:

az storage container create --name nyctaxi-consumption
az storage container create --name nyctaxi-curated
az storage container create --name nyctaxi-demo
az storage container create --name nyctaxi-raw
az storage container create --name nyctaxi-ref-data
az storage container create --name nyctaxi-staging
az storage container create --name nyctaxi-scratch

To verify the creation of your containers, from the Azure portal, click on your Azure storage account.

In the "Overview" section, click on "Blobs".

You should see the containers created previously.

Copying files for the workshop

Reference data

The files are stored in one of our storage accounts. Do not change the key below!

Run this script in the "Cloud Shell":

export SRC_STORAGE_ACCOUNT="franmerstore"

export SRC_STORAGE_ACCESS_KEY="eSSFOUVLg4gB3iSxuFVh/lDVoMeQHCqVj67xHdaYcPYoMSUqFuD+E2OeDhY4wRZCEF97nCRGOV0i7WJDyoOd7g=="

And then the following command:

azcopy --source https://franmerstore.blob.core.windows.net/nyctaxi-staging/reference-data/ --destination https://datavorestorage.blob.core.windows.net/nyctaxi-staging/reference-data/ --source-key $SRC_STORAGE_ACCESS_KEY --dest-key $AZURE_STORAGE_ACCESS_KEY --sync-copy --recursive

In our example, here is the command to run with the storage account used in this example:

azcopy --source https://franmerstore.blob.core.windows.net/nyctaxi-staging/reference-data/ --destination https://datavorestorage.blob.core.windows.net/nyctaxi-staging/reference-data/ --source-key $SRC_STORAGE_ACCESS_KEY --dest-key $AZURE_STORAGE_ACCESS_KEY --sync-copy --recursive

Copying transactional data

For this lab, let's take just a subset of the data. If you want to retrieve the whole dataset, refer to the index at the end of the document (this will take at least 2 hours).

For this lab we will only work with the year 2017, but nothing prevents you from rerunning the scripts later, simply changing the year.

Here are the scripts to copy transactional data:

azcopy
--source https://franmerstore.blob.core.windows.net/nyctaxi-staging/transactional-data/year=2017/ --destination https://<YourStorageAccount>.blob.core.windows.net/nyctaxi-staging/transactional-data/year=2017/ --source-key $SRC_STORAGE_ACCESS_KEY --dest-key $AZURE_STORAGE_ACCESS_KEY --sync-copy
--recursive

In our example, we will therefore have:

azcopy
--source https://franmerstore.blob.core.windows.net/nyctaxi-staging/transactional-data/year=2017/ --destination https://datavorestorage.blob.core.windows.net/nyctaxi-staging/transactional-data/year=2017/ --source-key $SRC_STORAGE_ACCESS_KEY --dest-key $AZURE_STORAGE_ACCESS_KEY --sync-copy
--recursive

To verify the result of your copy, you can run the following command:

az storage blob list --container-name nyctaxi-staging --output table

Create an Azure SQL Database

From the Azure portal, click on « + Create a resource », « Databases » and then on « SQL Database »

Then fill in the values for your database

  • You may need to create a SQL server.
  • Click on "Configure required settings", then "Create a new server".
  • Fill in the SQL server information. Remember your SQL Server password!
  • Click on "Select".

Continue to fill in your database information, then click on "Create".

Create SQL tables

From the Azure portal, click on your newly created database.

Click on « Query Editor », then « Login »

Enter your SQL credentials, and then click "Ok".

Create tables

From the Azure portal, copy the T-SQL script below to the editor, then click on "Run"

Drop Table If exists TRIPS_BY_YEAR;

Create Table TRIPS_BY_YEAR (

TAXI_TYPE varchar(10),

TRIP_YEAR int,

TRIP_COUNT bigint

);


Repeat the operation for the following two tables

Drop Table If exists TRIPS_BY_HOUR;

Create Table TRIPS_BY_HOUR (

TAXI_TYPE varchar(10),

TRIP_YEAR int,

TRIP_HOUR INT,

TRIP_COUNT bigint

);

Drop Table If exists BATCH_JOB_HISTORY;

Create Table BATCH_JOB_HISTORY

(

batch_id int,

batch_step_id int,

Batch_step_description varchar(50),

Batch_step_status varchar(10),

Batch_step_time varchar(25)

);

Alter Table BATCH_JOB_HISTORY

Add Constraint Batch_step_time_def

Default CURRENT_TIMESTAMP For Batch_step_time;

You should now see your tables in the Azure portal.
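For context, the workshop notebooks later write aggregated results into these tables from Databricks. The following is only a rough sketch of what such a write looks like with Spark's JDBC writer, not the workshop's actual code; the server, credentials, and the trips_by_year_df DataFrame are placeholders, and the driver class name assumes the Microsoft SQL Server JDBC driver available on Databricks clusters:

jdbc_url = ("jdbc:sqlserver://<yourSqlServer>.database.windows.net:1433;"
            "database=<yourDatabase>")
connection_properties = {
    "user": "<yourSqlUser>",
    "password": "<yourSqlPassword>",
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
}

# Append a DataFrame of yearly trip counts into the TRIPS_BY_YEAR table.
(trips_by_year_df.write
    .mode("append")
    .jdbc(jdbc_url, "TRIPS_BY_YEAR", properties=connection_properties))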


Create Azure Databricks

From the Azure portal, click on "+ Create a resource", "Data + Analytics", then "Azure Databricks".

Fill in the information for your workspace. Select your resource group and select the "East US 2" region in the "Location" field.

Click on "Create".

After you create your Databricks workspace, your resource group should contain the following resources:

Provision a cluster and start analyzing your data

Click on your Databricks resource

In the "Overview" section, click on "Launch Workspace" button.

Your Azure infrastructure is ready. You can now start your data analysis with Azure Databricks.

Part 2: Using Databricks

Workshop: Data Engineering with Databricks

Create the cluster

In the Databricks workspace, on the left, click on "Clusters", and then "Create Cluster"

Here are the parameters you need to set for your cluster:

  • Databricks Runtime Version: 4
  • Python Version: 3
  • Worker Type: STANDARD_DS13_V2
  • Spark Config: spark.hadoop.fs.azure.account.key.<yourStorageAccount>.blob.core.windows.net <yourKey>

Below is the sample configuration for this Workshop.

Click on "Create cluster"

After a few minutes, your cluster is ready
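Once the cluster is up, the Spark configuration entered above makes the storage account reachable from any notebook attached to it via the wasbs:// scheme. As a rough illustration only (the spark and display objects are provided by the Databricks notebook environment; the container and path below are just examples based on the data copied earlier, adjust them as needed):

# Read one of the staged folders straight from Blob storage over wasbs://.
# Replace <yourStorageAccount> with the account configured on the cluster.
df = spark.read.csv(
    "wasbs://nyctaxi-staging@<yourStorageAccount>.blob.core.windows.net/reference-data/",
    header=True, inferSchema=True)
display(df.limit(10))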

Notebooks

On the left side of the screen, click on "Workspace" then "Users".

To the right of your alias, click the arrow and then "Import"

In the "Import Notebooks" window, click on "URL", then copy the address below:

https://franmerstorage.blob.core.windows.net/databricks/Notebooks/nyc-taxi-workshop.dbc

Click on the « Import » button

Notebooks must appear in your workspace.

Attach notebook to a cluster

Before you can start using any notebook, you need to attach it to a cluster.

Click on "Detached" and then select a cluster.

Configure your Notebook

In some notebooks, remember to replace the placeholder values with your storage account information:

In the Notebook "2-CreateDatabaseObjects", fill in the fields for your Azure SQL server, SQL database and SQL Server credentials.

In this example, we will have:


Click on the Notebook « 1-LoadReferenceData » to continue the workshop with Databricks

During the workshops, to return to your workspace to change working notebooks, click "Workspace" to the left of the screen.

From now, continue the workshop with the notebooks. Below are tips that can be useful for the different notebooks in this workshop.

Tips with Notebooks

Some jobs will take several minutes. You can check progress by clicking on the arrows to the left of the jobs.

In the "05-GenerateReports" part, in the "Report-1" notebook, you will find sample reports.

At the upper right of a cell, click the chart icon.

On the contextual menu, click "Show Code"

The script behind the report is displayed, like you can see below:

Under the graph, many options are available to change the visualization of your data

In the "06-BatchJob" part, in the "GlobalVarsMethods" notebook, don't forget to fill in the values for your Azure SQL database.

First part:

Second part:

Interactive report with Power BI Desktop

After completing the last notebook of the workshop, it can be interesting to connect Power BI Desktop to your data.

From Power BI Desktop, click « Get Data » then « More »

Select the "Spark" connector and click "Connect"

The Spark window appears. You need to provide the connection information

Right now, the Databricks cluster address is not trivial to find. But here's how to do it.

From your Databricks workspace, click "Clusters", and then the name of your cluster ("DatavoreCluster" in our example)

Once on your cluster's page, click on "JDBC/ODBC".

In the "JDBC URL" field, compose the server address from the two elements highlighted in red, adding https:// at the beginning to form a valid address.

For our example this will give:

https://eastus2.azuredatabricks.net:443/sql/protocolv1/o/1174394268694420/0317-035245-abed504

Here's what it gives in Power BI Desktop. Click on "Ok".

Now you need to provide credentials.

To do this, you need to generate an access token in Databricks.

In the Databricks workspace, click the top right on the user icon, and then click "User settings"

Click on "Generate New Token"

Give your token an explicit name and click on "Generate"

From the window that appears, copy and keep this token, because it will no longer be possible to recover it later.

Click on "Done".

The token is generated!

From Power BI Desktop, paste the token into the "Password" field, then enter the word token in the "User Name" field (you couldn't make that up, could you?).

Click on "Connect".

Now, you are ready to explore your data. Select the 7 tables below, then click "Load"

Once the data is loaded, click on the relationships icon on the left side.

By a simple drag and drop, build the connections between tables.

The "Create relationship" window appears, click on "Ok"

After you have created all the relationships, here is an example of what you can get:

Here is an example of a report

Index

If you want to load all the data into your storage account, you can do so with the following command:

azcopy
--source https://franmerstore.blob.core.windows.net/nyctaxi-staging/ --destination https://<YourStorageAccount>.blob.core.windows.net/nyctaxi-staging/ --source-key $SRC_STORAGE_ACCESS_KEY --dest-key $AZURE_STORAGE_ACCESS_KEY --sync-copy --recursive

In our example this command line will be:

azcopy
--source https://franmerstore.blob.core.windows.net/nyctaxi-staging/ --destination https://datavorestorage.blob.core.windows.net/nyctaxi-staging/ --source-key $SRC_STORAGE_ACCESS_KEY --dest-key $AZURE_STORAGE_ACCESS_KEY --sync-copy --recursive

Excited about Power BI


It has been a while since I last wrote a TechNet blog article. This time I want to share my excitement about Power BI and FastTrack. In my role as a Lead FastTrack Manager, I have supported many customers through the enablement and adoption of Office 365 services and workloads.

I have found the Office 365 Adoption Content Pack very useful. It contains a number of reports, including Adoption, Storage usage, Communication, Collaboration, Activation, and Access from anywhere. Feel free to explore this content pack; for further information visit: https://support.office.com/en-us/article/Office-365-Adoption-Content-Pack-77ff780d-ab19-4553-adea-09cb65ad0f1f

 

Happy learning. Let me know if you have any questions.

Best regards,

Jackie

 

Configuration Manager Support Articles


Hello everyone!  I often find myself providing support and troubleshooting articles to many customers and thought it would be beneficial to have a central location of links to reference.  The links below cover specifically the support and troubleshooting articles for System Center Configuration Manager (ConfigMgr) 2012 and Current Branch (CB).

Please note this blog does not include any links addressing specific updates or hotfixes for ConfigMgr.  I have added those links under "Additional References" which will take you directly to our System Center Configuration Manager Team Blog.


Content Distribution

Understanding and Troubleshooting Content Distribution in Microsoft Configuration Manager

https://support.microsoft.com/en-us/help/4000401/content-distribution-in-mcm


Data Replication Service (DRS)

Troubleshooting the Database Replication Service in Microsoft Configuration Manager

https://support.microsoft.com/en-us/help/20033/troubleshoot-database-replication-service-in-mcm


Microsoft Store for Business

Understand the ConfigMgr Management Features and Troubleshoot Issues with Microsoft Store for Business

https://support.microsoft.com/en-us/help/4010214/understand-sccm-management-features-and-troubleshoot-issues-with-msfb


Operating System Deployment (OSD)

Troubleshoot the Install Application Task Sequence in Microsoft Configuration Manager

https://support.microsoft.com/en-us/help/18408/troubleshoot-install-application-task-sequence

Troubleshooting PXE Boot Issues in Configuration Manager

https://support.microsoft.com/en-us/help/10082/troubleshooting-pxe-boot-issues-in-configuration-manager-2012


Reporting Services

Reports don’t run in System Center 2012 R2 Configuration Manager

https://support.microsoft.com/en-us/help/3060813/reports-don-t-run-in-system-center-2012-r2-configuration-manager


Security

How to Enable TLS 1.2 for Configuration Manager

https://support.microsoft.com/en-us/help/4040243/how-to-enable-tls-1-2-for-configuration-manager


Service Connection Point

Configuration manager Service Connection Point Doesn’t Download Updates

https://support.microsoft.com/en-us/help/3187516/configuration-manager-service-connection-point-doesn-t-download-update


Site Administration

Diagnostics

https://support.microsoft.com/en-us/help/2704781/sdp-3-5ee487a8-b2ed-4bc8-80ea-457f9b683c77-system-center-2012-configur

Maintenance Tasks Default Settings

https://support.microsoft.com/en-us/help/2897050/maintenance-tasks-default-settings-in-system-center-2012-and-system-ce


SQL

After the System Center 2012 ConfigMgr SQL Site Database is moved, you cannot create a Software Update Package or Application

https://support.microsoft.com/en-us/help/3057073/after-the-system-center-2012-configmgr-sql-site-database-is-moved-you

SQL Query Times Out or Console Slow on Certain Configuration Manager Database Queries

https://support.microsoft.com/en-us/help/3196320/sql-query-times-out-or-console-slow-on-certain-configuration-manager-d


Supported Configurations

High Availability Options for System Center Configuration Manager

https://docs.microsoft.com/en-us/sccm/protect/understand/high-availability-options

Recommended Hardware for System Center Configuration Manager

https://docs.microsoft.com/en-us/sccm/core/plan-design/configs/recommended-hardware

Site and Site System Prerequisites for System Center Configuration Manager

https://docs.microsoft.com/en-us/sccm/core/plan-design/configs/site-and-site-system-prerequisites

Size and Scale Numbers for System Center Configuration Manager

https://docs.microsoft.com/en-us/sccm/core/plan-design/configs/size-and-scale-numbers

Support for Virtualization Environments in System Center Configuration Manager

https://docs.microsoft.com/en-us/sccm/core/plan-design/configs/support-for-virtualization-environments 

Support for Windows 10 as a Client and ADK

https://docs.microsoft.com/en-us/sccm/core/plan-design/configs/support-for-windows-10

Support for Windows Features and Networks

https://docs.microsoft.com/en-us/sccm/core/plan-design/configs/support-for-windows-features-and-networks

Support Policy for Making Manual Database Changes

https://support.microsoft.com/en-us/help/3106512/support-policy-for-manual-database-changes-in-a-configuration-manager

Supported Active Directory Domains for System Center Configuration Manager

https://docs.microsoft.com/en-us/sccm/core/plan-design/configs/support-for-active-directory-domains

Supported Operating Systems for Clients and Devices

https://docs.microsoft.com/en-us/sccm/core/plan-design/configs/supported-operating-systems-for-clients-and-devices

Supported Operating Systems for Site System Servers

https://docs.microsoft.com/en-us/sccm/core/plan-design/configs/supported-operating-systems-for-site-system-servers

Supported Operating Systems for System Center Configuration Manager Consoles

https://docs.microsoft.com/en-us/sccm/core/plan-design/configs/supported-operating-systems-consoles

Supported SQL Versions for System Center Configuration Manager

https://docs.microsoft.com/en-us/sccm/core/plan-design/configs/support-for-sql-server-versions


Windows Server Update Services (WSUS)

1702 Clients do not get Software Updates from Configuration Manager

https://support.microsoft.com/en-us/help/4041012/1702-clients-do-not-get-software-updates-from-configuration-manager

Configure Software Update Synchronization in System Center Configuration Manager

https://support.microsoft.com/en-us/help/10329/configuring-software-update-synchronization-in-system-center-configura

Fix Windows Update Issues

https://support.microsoft.com/en-us/help/10164/fix-windows-update-errors

How to Troubleshoot Software Update Deployments in System Center 2012 Configuration Manager

https://support.microsoft.com/en-us/help/3090264/how-to-troubleshoot-software-update-deployments-in-system-center-2012

How to Troubleshoot Software Update Scan Failures

https://support.microsoft.com/en-us/help/3090184/how-to-troubleshoot-software-update-scan-failures-in-system-center-201

How to Troubleshoot WSUS

https://support.microsoft.com/en-us/help/4025764/how-to-troubleshoot-wsus

Software Update Management Troubleshooting in Configuration Manager

https://support.microsoft.com/en-us/help/10680/software-update-management-troubleshooting-in-configuration-manager

Software Update Maintenance in System Center 2012 Configuration Manager

https://support.microsoft.com/en-us/help/3090526/software-update-maintenance-in-system-center-2012-configuration-manage

The Microsoft Windows Server Update Services (WSUS)  SelfUpdate Service does not send Automatic Updates

https://support.microsoft.com/en-us/help/920659/the-microsoft-windows-server-update-services-wsus-selfupdate-service-d

Troubleshooting ConfigMgr 2012 Software Update Synchronization Issues

https://support.microsoft.com/en-us/help/10059/troubleshooting-configmgr-2012-software-update-synchronization-issues

Troubleshooting Issues with Windows Client Agents

https://support.microsoft.com/en-us/help/10132/troubleshooting-issues-with-wsus-client-agents

Unable to connect to WSUS Administration Website

https://support.microsoft.com/en-us/help/2737219/unable-to-connect-to-wsus-administration-website

Using Log Files to Track the Software Update Deployment Process in System Center 2012 Configuration Manager

https://support.microsoft.com/en-us/help/3090265/using-log-files-to-track-the-software-update-deployment-process-in-sys

Windows Update Error Code List

https://support.microsoft.com/en-us/help/938205/windows-update-error-code-list


Additional References


I hope this resource is of value to you.  Thank you!

Brandon McMillan, Premier Field Engineer

What’s new for US partners the week of April 9


Find resources that help you build and sustain a profitable cloud business, connect with customers and prospects, and differentiate your business. Read previous issues of the newsletter and get real-time updates about partner-related news and information on our US Partner Community Twitter channel.

Subscribe to receive posts from this blog in your email inbox or as an RSS feed.

Looking for partner training courses, community calls, and information about technical certifications? Read our MPN 101 blog post that details your resources, and refer to the Hot Sheet training schedule for a six-week outlook that’s updated regularly as we learn about new offerings. To stay in touch with us and connect with other partners and Microsoft sales, marketing, and product experts, join our US Partner Community on Yammer.

Top stories

New posts on the US Partner Community blog

New on demand videos

MPN news

Partner webinars available this spring

Learning news

Upcoming events

US Partner Community partner call schedule

Community calls and a regularly updated, comprehensive schedule of partner training courses are listed on the Hot Sheet


Azure Short Videos


I recently blogged about Azure Short Videos - a few topics are still works in progress and I'll update them soon. However, there are links to hundreds of videos already.

Azure is a fast-moving platform, with changes coming to it more often and more quickly than ever before. It's difficult to keep up with the features and how you can take advantage of them. Even I find it difficult to find the right resources. If I need quick information about, say, "What is StorSimple?", what I find sometimes isn't what I'm looking for. So I have compiled a list of short videos that may help you understand a feature or enable you to do some configuration quickly. Do send me a note if you like something, if you don't, or if you come across any good resource!

So have fun learning and understanding Azure - Azure Short Videos!

Arabic Language Pack for SCSM Self Service Portal


Hi All,

One of the challenges we face in our region is providing users with a Self Service Portal in their native language. Since Arabic is not among the built-in languages shipped with the Service Manager Self Service Portal, we were looking into different options, such as a third-party portal, but not anymore 🙂

We spent some time looking into the files that the SSP uses and located the language resource files, which you can use not only for Arabic but for any other language, even the one used by aliens on Mars 🙂

In this post we will cover two things. First, how to filter the language settings in your portal and offer only the languages you need instead of having all of them available. Second, how to configure an Arabic language pack for the System Center Service Manager Self Service Portal.

First: Show only preferred languages

When you click on the language settings (top right corner) in the Self Service Portal, by default 10 or more languages appear for selection, including Chinese, French, Japanese, etc. If you want your users to choose among only 2 or 3 languages, follow the procedure below:

1- Browse to the (C:\inetpub\wwwroot\SelfServicePortal\Views\Shared) folder

2- Edit the (_Layout.cshtml) file using Notepad or any other tool (run as administrator). Don't forget to back up the file before editing it.

3- Search the file for "<ul class=lang_menu ..."

4- Remove the lines for the unnecessary languages and keep the ones you want your users to see. Remember to remove the whole line (from <li ------- to -------- </li>)

I removed all languages except English, French and Dutch.

5- Refresh your portal ...

Done... let's see how we can configure a new language pack 🙂

 


 

Second: Configure an Arabic language pack for the SSP

As mentioned before, this is not limited to Arabic; you can configure any language you want, but in this example we will use an Arabic language pack. Follow the procedure below:

1- Browse to the (C:\inetpub\wwwroot\SelfServicePortal\Views\Shared) folder

2- Edit the (_Layout.cshtml) file using Notepad or any other tool (run as administrator). Don't forget to back up the file before editing it.

3- Add the following line inside <ul class=lang_menu ...

<li value="ar-JO" tabindex="12">Arabic</li>

Note: "ar-JO" is the Arabic language code for Jordan. For more information about language codes for different countries, see https://www.andiamo.co.uk/resources/iso-language-codes

 

4- Browse to the folder (C:\inetpub\wwwroot\SelfServicePortal\App_GlobalResources)

5- Copy the file (SelfServicePortalResources.en.resx) to your local machine (where an Arabic keyboard is supported)

6- Rename the file to (SelfServicePortalResources.ar.resx)

7- Edit the file using any tool (e.g. Notepad++)

8- In the file you will find all the strings used; translate them into Arabic ... I did that for you as well, if you want to use my file: SelfServicePortalResources.ar_

 

9- Upload the file to the folder (C:\inetpub\wwwroot\SelfServicePortal\App_GlobalResources)

 

10 - Refresh your browser and enjoy 🙂

 

NOTE: if you have no Service Offering with the Arabic language selected, you won't see any offerings. Create at least one Service Offering with the language set to Arabic, then add some Request Offerings to it.

Thanks for reading

Mohammad Damati

[Campaign] Save up to ¥25,000 on the popular Surface Laptop! (for orders delivered by June 29) [Updated 4/10]

Detecting the latest ransomware threat (a.k.a. Bad Rabbit) with Azure Security Center


By the Principal Security Engineering Manager, Microsoft Threat Intelligence Center

 

The Windows Defender team recently updated the malware encyclopedia with a new ransomware threat, Ransom:Win32/Tibbar (also known as Bad Rabbit). The update includes comprehensive guidance on mitigating the new threat. Microsoft anti-malware solutions, including Windows Defender Antivirus and Microsoft Antimalware for Azure Cloud Services and Virtual Machines, have been updated to detect and protect against this threat.

This article summarizes additional steps you can take with Azure Security Center to prevent and detect this threat for workloads running in Azure. Get more information about enabling Azure Security Center.

 

Prevention

Azure Security Center scans your virtual machines and servers to assess their endpoint protection status. Machines found not to be adequately protected are identified, and a corresponding recommendation is provided.

 

Azure Security Center

 

Drilling into the "Compute" pane or the overview recommendations pane shows more detail, including the Endpoint Protection installation recommendation shown below:

 

Compute

 

Clicking this brings up a dialog that lets you select and install an endpoint protection solution, including Microsoft's own Antimalware solution for Azure Cloud Services and Virtual Machines, which helps protect against this kind of ransomware threat.

 

Select Endpoint Protection

 

These recommendations and the associated mitigations are available to Azure Security Center Free tier customers.

 

Detection

Azure Security Center customers who have opted into the Standard tier also benefit from generic and specific detections related to the Ransom:Win32/Tibbar.A (Bad Rabbit) ransomware. These alerts are accessed through the detections pane highlighted below and require the Azure Security Center Standard tier.

 

Security Center - Overview

 

For example, generic alerts related to ransomware include:

  • Event log clearing, which ransomware such as Bad Rabbit performs
  • Deletion of volume shadow copies to prevent customers from recovering their data. One such example is shown below:

 

All file shadow copies have been detected

 

In addition, Azure Security Center has updated its ransomware detections with specific IOCs related to Bad Rabbit.

 

Possible ransomware evidence detected

 

You should follow the remediation steps detailed in the alert, namely:

  1. Run a full anti-malware scan and confirm that the threat has been removed.
  2. Install and run Microsoft Safety Scanner.
  3. Proactively perform these actions on other hosts in the network.

Although the alert relates to a specific host, sophisticated ransomware attempts to spread to other nearby machines. It is important that you apply these remediation steps to protect all hosts in your network, not just the host identified in the alert.

Triaging a DLL planting vulnerability


This article is a translation of the Security Research & Defense blog post "Triaging a DLL planting vulnerability" (published April 4, 2018, US time).


Dynamic-link library (DLL) planting (binary planting/hijacking/preloading) issues tend to resurface every few years, and it is not always clear how Microsoft will respond to such reports. This blog post clarifies what we look at when triaging DLL planting issues.

When an application dynamically loads a DLL without specifying a fully qualified path name, Windows tries to locate the DLL by searching a well-defined set of directories in a particular order, as described in Dynamic-Link Library Search Order. With the default SafeDllSearchMode, the search order is as follows:

  1. The directory from which the application loaded
  2. The system directory. Use the GetSystemDirectory function to get the path of this directory.
  3. The 16-bit system directory. There is no function that obtains the path of this directory, but it is searched.
  4. The Windows directory. Use the GetWindowsDirectory function to get the path of this directory.
  5. The current working directory
  6. The directories listed in the PATH environment variable. Note that this does not include the per-application path specified by the App Paths registry key. The App Paths key is not used when computing the DLL search path.

The default DLL search order can be changed through various options, as described in one of our earlier blog posts, "Load Library Safely".

A DLL load in an application becomes a DLL planting vulnerability when an attacker can plant a malicious DLL in any of the directories that are searched, per the search order, and the planted DLL is not found in a higher-priority search directory that the attacker has no access to. For example, for an application that loads foo.dll, if foo.dll is not present in the application directory, the system directory, or the Windows directory, an attacker who can access the current working directory can plant foo.dll there. DLL planting vulnerabilities are convenient and low-effort for an attacker, and code execution is very easy because DllMain() is called immediately when the DLL is loaded. The attacker does not have to worry about bypassing mitigations if the application allows unsigned binaries to be loaded.
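To make the difference concrete, here is a small illustrative sketch (not from the original post) using Python's ctypes on Windows: loading a DLL by bare name goes through the search order above, whereas a fully qualified path pins the exact file that gets loaded. kernel32.dll is used here only so the snippet runs; as a protected known DLL it is not itself a planting target.

import ctypes
import os

# Loaded by bare name: Windows walks the DLL search order described above,
# so for a non-system DLL a planted copy in a lower-priority directory
# could be picked up instead.
kernel32_by_name = ctypes.WinDLL("kernel32.dll")

# Loaded by fully qualified path: only this exact file can be loaded.
system32 = os.path.join(os.environ["SystemRoot"], "System32")
kernel32_by_path = ctypes.WinDLL(os.path.join(system32, "kernel32.dll"))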

Depending on where in the DLL search order the malicious DLL is planted, the vulnerability falls broadly into one of the following three categories:

  1. Application directory DLL planting
  2. Current working directory (CWD) DLL planting
  3. PATH directories DLL planting

These categories determine our response. Let's walk through them and see how we triage each one.

Application directory DLL planting

The application directory is where an application keeps the non-system DLLs it depends on and treats them as trusted. Files placed in a program's installation directory are presumed to be benign and trusted, and security controls such as the directory ACL are generally used to protect them. A user who can replace binaries in the installation directory is presumed to have permission to write or overwrite files there. The application directory is considered the code directory, holding the files related to that application's code. If an attacker who has no access rights to that directory can overwrite a DLL in the application directory, that is a far bigger problem than simply replacing/planting a single DLL.

Let's look at a few application directory DLL planting scenarios.

Scenario 1: Planting a malicious binary in a trusted application directory

For a properly installed application, the application directory is usually protected by an ACL, and in this scenario modifying the contents of the application directory requires elevated (usually administrator) privileges. For example, Microsoft Word installs to C:\Program Files (x86)\Microsoft Office\root\Office16. Making changes to this directory requires administrator privileges. A victim with administrator privileges could be lured or socially engineered into placing a DLL in the trusted location, but in that situation they could just as well be lured or tricked into doing something worse.

Scenario 2: Planting a malicious binary in an untrusted application directory

Applications installed without an installer (for example via XCOPY), applications placed on a shared folder, applications downloaded from the internet, and standalone executables located in directories not protected by ACLs are some of the scenarios that fall into the untrusted category. For example, installers downloaded from the internet and run from the default "Downloads" folder (including redistributable packages, setup.exe files generated by ClickOnce, and self-extracting archives generated by IExpress). Launching an application from an untrusted location is dangerous, and a victim can easily be lured or tricked into planting a DLL in these untrusted locations.

DLL planting issues that fall into the application directory DLL planting category are treated as defense-in-depth issues, and fixes are considered only for future versions. Given the amount of social engineering the attack requires and the fact that the behavior is essentially by design, we treat reports in this category received at the MSRC as issues to consider for vNext (the next product version). The victim has to be lured into saving the malicious DLL (malware) in a location from which it can be launched, and also has to perform an undesirable action (for example, running an installer in the same directory as the malware). An application that has not been installed has no reference point for "trusted, safe directories/binaries" unless it creates a directory itself. Ideally, an installer should create a temporary directory with a randomized name (to prevent further DLL planting), extract its binaries there, and install the application from it. An attacker might use a drive-by download to place malware on the victim's system (for example in the "Downloads" folder), but the essence of the attack is social engineering.

Windows 10 Creators Update では、アプリケーション ディレクトリの DLL の植え付けの脆弱性を緩和するために使用可能な新たなプロセス軽減策を追加しました。この PreferSystem32 という新たなプロセス軽減策は、適用されると、DLL 検索順序におけるアプリケーション ディレクトリと system32 の順序を切り換えます。これにより、アプリケーション ディレクトリに植え付けられたいかなる悪意のあるシステム バイナリもハイジャックされません。これは、プロセスの生成が制御できるシナリオにおいて有効にできます。

現在の作業ディレクトリ (CWD) の DLL の植え付け

一般的にアプリケーションが呼び起こされる元となるディレクトリをアプリケーションは CWD と位置付けます。これは、アプリケーションが既定のファイル関連付けに基づいて起動された場合にも当てはまります。たとえば、“D:tempfile.abc”’ という共有フォルダーからファイルをクリックすることで、“D:temp” が .abc というファイル形式に関連付けされたアプリケーションの CWD としてセットされます。

特に WebDAV 共有など、リモートの共有フォルダーでファイルをホストするシナリオは、CWD の DLL の植え付けの問題をより脆弱にします。このようにして攻撃者は悪意のある DLL をファイルと共に保存し、ソーシャル エンジニアリングにより被害者にファイルを開かせ/クリックさせ、悪意のある DLL がターゲットとするアプリケーションにより読み込まれるように仕向けます。

シナリオ 3: CWD における悪意のあるバイナリの植え付け

最初の 3 つの信頼できる場所から DLL を読み込むことができない場合、アプリケーションは、信頼できない CWD からその DLL を探します。被害者が \server1share2 という場所から .doc ファイルを開こうとすると Microsoft Word が起動しますが、Microsoft Word が、依存する DLL の 1 つ oart.dll を信頼できる場所から見つけることができない場合、Word は CWD である \server1share2 からそのファイルを読み込もうとします。その共有フォルダーは信頼できない場所であり、攻撃者は容易にアプリケーション oart.dll を植え付けることができます。

Trigger => \\server1\share2\openme.doc
Application => C:\Program Files (x86)\Microsoft Office\root\Office16\Winword.exe
Application directory => C:\Program Files (x86)\Microsoft Office\root\Office16
CWD => \\server1\share2
Malicious DLL => \\server1\share2\OART.DLL

DLL planting issues that fall into the CWD DLL planting category are treated as Important severity, and Microsoft releases security updates for them. Most of the DLL planting issues we have fixed in the past fall into this category, and a subset of them can be seen in Security Advisory 2269637. You might wonder why a Microsoft application would ever load a DLL that is not present in the application directory, the system directory, or the Windows directory. This happens because various optional components, different OS editions, and multiple architectures ship different sets of binaries, and an application sometimes cannot effectively check or validate this before attempting to load a DLL.

PATH directories DLL planting

The last place the DLL search order looks for a DLL is the set of PATH directories. The PATH directories are added by various applications to improve the user experience of locating the application and its related files.

Directories in the PATH environment variable are always controlled by administrators, and a standard user cannot modify their contents. If a world-writable directory were exposed through PATH, that would be a much bigger problem than a single instance of a DLL load, and we would treat it as an Important-severity issue. A pure DLL planting issue here, however, is not expected to cross a security boundary, so we consider it a low-severity security issue. DLL planting issues that fall into the PATH directories DLL planting category are therefore resolved as won't-fix.

Conclusion

We hope the explanation above answers any questions about how we triage reported DLL planting issues and which situations we consider important enough to warrant a security update. Below is a quick guide to what we will and will not fix through a security release (a down-level fix).

What Microsoft will address with a security fix

CWD scenarios – the associated application ends up loading a DLL from an untrusted CWD

What Microsoft will consider addressing in the next product release

Application directory scenarios – this depends on the product group's judgment and on whether the load is explicit or implicit. An explicit load can be tweaked, but an implicit load (a dependent DLL) is entirely by design, since the path cannot be controlled.

What Microsoft will not address (not a vulnerability)

PATH directory scenarios – these are not exploitable, because no directory in PATH should be writable without administrator rights.

 

-----

Antonio Galvan, MSRC

Swamy Shivaganga Nagaraju, MSRC Vulnerabilities and Mitigations Team

Troubleshooting tools for issues with OneDrive & OneDrive For Business


[Notes]

The information posted on this blog (attached documents, links, and so on) is current as of the date of writing and may change without notice.

It is also provided for reference purposes only, and Microsoft assumes no responsibility for it. Be sure to test thoroughly before applying anything described here.

 

[Summary]

Troubleshooting tools for issues with OneDrive & OneDrive For Business

 

[Cause or resolution]

 

<Method using the Easy fix tool>

Download the Easy fix file from the link below and run it on the PC that is experiencing sync or other problems.

[Reference] Restrictions and limitations when you sync SharePoint libraries through OneDrive for Business
https://support.microsoft.com/ko-kr/help/2933738/restrictions-and-limitations-when-you-sync-sharepoint-libraries-to-you

[Execution example]

 

Close all Windows Explorer windows, then run the tool.

 

Two report files are generated, as shown below.

These report files check the current OneDrive For Business configuration for anything that hits a restriction or uses unsupported characters, and report the results.

 

[Example]

*In the case below, no issues were found.

Next, clicking NEXT starts the work of fixing the problems found on the PC. At this point the tool corrects the unsupported characters in question and repairs whatever is blocking synchronization.

 

[Screen shown when the operation is complete]

 

<Deleting the OneDrive for Business cache and re-syncing>
On the affected PC, you can resolve conflicts by removing the OneDrive for Business cache and then re-syncing. Note, however, that during the re-sync every document previously synced to the PC disappears and is synced again from scratch. Because the data really is deleted, back up your existing documents before you proceed, in case of unexpected issues.
You can download the Easy Fix tool from the link below and run it to delete the OneDrive for Business cache.

[Reference] How to remove the OneDrive for Business cache by using the "Easy fix" tool

https://support.microsoft.com/en-us/help/3038627/how-to-remove-the-onedrive-for-business-cache-by-using-the-easy-fix-to

 

<References>

Updates to the OneDrive for Business sync app

https://support.office.microsoft.com/ko-kr/article/%eb%b9%84%ec%a6%88%eb%8b%88%ec%8a%a4%ec%9a%a9-onedrive-%eb%8f%99%ea%b8%b0%ed%99%94-%ec%95%b1-%ec%97%85%eb%8d%b0%ec%9d%b4%ed%8a%b8-49771c73-e7ad-4d26-bff1-50bb12a83817?ui=ko-KR&rs=ko-KR&ad=KR

 

Update history for Office 2013

https://support.office.com/ko-kr/article/office-2013%ec%97%90-%eb%8c%80%ed%95%9c-%ec%97%85%eb%8d%b0%ec%9d%b4%ed%8a%b8-%ea%b8%b0%eb%a1%9d-19214f38-85b7-4734-b2f8-a6a598bb0117?wa=wsignin1.0&ui=ko-KR&rs=ko-KR&ad=KR

 

Fix OneDrive sync problems

https://support.office.com/ko-kr/article/onedrive-%eb%8f%99%ea%b8%b0%ed%99%94-%eb%ac%b8%ec%a0%9c-%ed%95%b4%ea%b2%b0-83ab0d8a-8400-45b0-8dcf-dc8aa8a6bcf8?ui=ko-KR&rs=ko-KR&ad=KR

 

Restrictions and limitations when you sync SharePoint libraries to your computer through OneDrive for Business (groove.exe)

https://support.microsoft.com/ko-kr/help/2933738/restrictions-and-limitations-when-you-sync-sharepoint-libraries-to-you

 

File size limit for syncing: You can sync files of up to 2 gigabytes (GB) in any SharePoint library.

Number of items that can be synced

You can sync up to 20,000 items in total across all synced libraries. This includes OneDrive for Business libraries, team site libraries, or both, and it counts both folders and files. In addition to the overall sync limit, there are separate limits on how many items can be synced for each library type.

  • You can sync up to 20,000 items in a OneDrive for Business library, including folders and files.
  • You can sync up to 5,000 items in a SharePoint library, including folders and files. These are libraries found on various SharePoint sites, such as team sites and community sites, libraries that other people created, or libraries created from a site page. You can sync multiple SharePoint libraries; everything you sync across all team sites counts toward the overall 20,000-item limit.

 

Restrictions and limitations when you sync files and folders (onedrive.exe)

https://support.microsoft.com/ko-kr/help/3125202/restrictions-and-limitations-when-you-sync-files-and-folders

 

 
Number of items that can be synced: SharePoint Online can store 30 million documents per library, but OneDrive sync performance may start to degrade once you store more than 100,000 files in a single OneDrive for Business site or team site library. To work around this limit, store files across multiple folders/libraries. If a single OneDrive for Business site has more than 100,000 files, OneDrive will keep syncing, but you may have to wait a long time for the sync to complete.

 

When you view a document library on the web, other limits also apply that can affect how you choose to set up your file structure in OneDrive.

 

The OneDrive team is constantly working to optimize OneDrive so that it handles libraries with large numbers of files more efficiently.

File size limit for syncing: Each file you upload to a OneDrive for Business library is subject to a 15 GB (gigabyte) file size limit.
Character limit for files and folders: In SharePoint Online, file name paths can be up to 400 characters long. In some scenarios, for example when the library's URL path is very long, this limit may be less than 400 characters.
SharePoint Server on-premises: The onedrive.exe OneDrive for Business sync client does not support syncing SharePoint on-premises data. If you want to sync in a SharePoint on-premises environment, you must use the older OneDrive for Business application (groove.exe).

SharePoint Tidbit – Optimizing SQL for SharePoint on-prem


Hello All,

First of all, these steps apply to both SharePoint 2013 and SharePoint 2016 unless mentioned otherwise, and they apply to all supported versions of SQL Server unless stated otherwise.

I'm sure you realize this, but let it be said: SQL is the heart, brain, and body of SharePoint. If SQL is not performing well, your users will not be happy, so here are some of the things I would recommend:

  1. Set Max Degree of Parallelism (MAXDOP) to 1; because of the way SharePoint works, this is the only acceptable value (a configuration sketch follows this list).
  2. Set AUTO_UPDATE_STATISTICS & AUTO_CREATE_STATISTICS to disabled for all databases.
  3. Do not ignore the Temp DB; optimizing this database means several things:
    1. Pre-grow the database
    2. Split the database and transaction logs across multiple disks (NOTE: The faster the drive, the better for these files)
    3. Create multiple files for the database, one file per CPU up to a maximum of 8 files (NOTE: Files should be the same size)
    4. Set the recovery model to simple for this database
  4. Tune SQL database performance! I recommend following the recommendations in this article https://www.microsoft.com/en-us/download/details.aspx?id=24282
  5. Split databases and transaction logs to separate disks
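
As a rough illustration of items 1 and 3 above, the sketch below applies the MAXDOP setting and adds a second tempdb data file using the SqlServer PowerShell module. The instance name SP-SQL01, the file path, and the sizes are placeholders assumed for the example, not values from this post; agree the real values with your DBA before running anything.

# Hedged sketch only - assumes the SqlServer module (Invoke-Sqlcmd) is installed
# and that "SP-SQL01" is your SQL Server instance; adjust names, paths and sizes.
$instance = "SP-SQL01"

# Item 1: set Max Degree of Parallelism to 1, as SharePoint requires.
Invoke-Sqlcmd -ServerInstance $instance -Query @"
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1; RECONFIGURE;
"@

# Item 3: add an equally sized tempdb data file on a fast disk (one per CPU, up to 8).
Invoke-Sqlcmd -ServerInstance $instance -Query @"
ALTER DATABASE [tempdb]
    ADD FILE (NAME = N'tempdev2', FILENAME = N'T:\TempDB\tempdev2.ndf',
              SIZE = 8GB, FILEGROWTH = 1GB);
"@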

The following steps will also help with performance, but are more complicated:

  1. Split the search database and transaction logs onto their own fast disks
  2. For all databases that will not be restored in case of disaster recovery (i.e. Search and Configuration), set the recovery model to simple
  3. You can further improve content database performance by doing the following (see the sketch after this list):
    1. Pre-grow databases
    2. Split the database across multiple files
    3. Place files on fast disks
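
To make the content database items above concrete, here is a hedged sketch of pre-growing a content database and adding a second data file on a separate fast disk. The database name WSS_Content_Intranet, the logical file names, the path, and the sizes are assumptions for illustration only; confirm the logical names with sp_helpfile and agree sizes with your DBA.

# Hedged sketch - placeholder database name, file names, path and sizes.
$instance = "SP-SQL01"

Invoke-Sqlcmd -ServerInstance $instance -Query @"
-- Pre-grow the primary data file so growth does not happen during business hours.
ALTER DATABASE [WSS_Content_Intranet]
    MODIFY FILE (NAME = N'WSS_Content_Intranet', SIZE = 50GB, FILEGROWTH = 1GB);

-- Spread the database over a second file placed on a separate fast disk.
ALTER DATABASE [WSS_Content_Intranet]
    ADD FILE (NAME = N'WSS_Content_Intranet_2',
              FILENAME = N'F:\SQLData\WSS_Content_Intranet_2.ndf',
              SIZE = 50GB, FILEGROWTH = 1GB);
"@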

As a final thought, remember to work closely with your DBA; they will be able to help you get this right.

Pax


MIM Hybrid Reporting with PowerBI


MIM 2016 includes a new feature called hybrid reporting, which collects identity management activities across different MIM Service systems and provides a unified report in the Azure portal. Currently only three reports are available in Azure Active Directory (self-service password registration/reset and self-service groups activity). In addition, you can export the data to your security information and event management (SIEM) system for your own custom views. In this blog, I will leverage the Microsoft PowerBI platform to analyse those requests and generate some essential reports.

First, we need to download and install the Microsoft Identity Manager Hybrid Reporting Agent on all MIM Service servers. The Identity Manager activity data is then sent, as JSON, to the Windows Event Log under a well-defined path: Application and Services Logs, Identity Manager Request Log. From there, we can start exporting that data to a SIEM.

To simplify the data collection, I run the following PowerShell command to export the events to a CSV file, and then import the data into PowerBI Desktop by selecting "Get Data" --> "Text/CSV".

Get-WinEvent -LogName "Identity Manager Request Log" | ForEach-Object { "$($_.Id)|$($_.Message)" | Out-File MIMRequestNew.csv -Append }
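
As an optional variation, you can also flatten the JSON in PowerShell before the data ever reaches PowerBI. The sketch below is not part of the original walkthrough: it assumes each event message is a single JSON document (as described above), and Export-Csv takes its column headers from the first event it sees, so inspect one parsed object before deciding which properties to keep. The rest of this post continues with the pipe-delimited export shown above.

# Optional hedged sketch: parse the JSON payload in PowerShell instead of in
# PowerBI's Query Editor. Attribute names vary by request type (CRUD), and
# Export-Csv derives its columns from the first object in the pipeline.
Get-WinEvent -LogName "Identity Manager Request Log" |
    ForEach-Object { $_.Message | ConvertFrom-Json } |
    Export-Csv -Path MIMRequestParsed.csv -NoTypeInformation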

During the import of the CSV file, PowerBI allows us to customize the delimiter and load the values into separate columns; click "Load" to confirm.

Next, we leverage the Query Editor to transform these MIM requests into a table for further reporting. Since the MIM requests are written in JSON format, select the message column and choose "Transform" --> "JSON" from the right-click menu.

Once complete, the MIM requests are parsed into records, which lets us navigate the request details.

Then we click the button in the right corner of the column header to expand the properties/attributes of the records. Please note that the attributes of the records vary across the different types (CRUD) of MIM request, so you should compose the Query Editor steps accordingly.

Finally, all we need to do is publish the dataset to the PowerBI service and build the report from there. Below are some basic reports generated via "Quick insights". Enjoy 🙂

 

Tip of the Day: Remote Desktop web client public preview


Today's tip...

As announced at Microsoft Ignite, a new web client is being developed to provide access to virtualized apps and desktops from a browser, without the need to install a local client. This provides a consistent experience across devices, minimizes installation or maintenance costs, and provides quick and easy access from kiosks and other non-personal devices.

The first release of the web client can access apps and desktops published from a Remote Desktop Services deployment, copy text to and from the session (using Ctrl+C and Ctrl+V), print to a PDF file, and is available in 18 languages. Additional functionality will be enabled in future releases based on your feedback.

The web client can be added to an existing Remote Desktop Services deployment running Windows Server 2016 and will be available side-by-side with the existing RDWeb page. As we approach general availability, we are providing the client in preview form to gather your feedback and ensure its readiness.

Get started today with our documentation to install and publish the web client using the new PowerShell scripts. The client can be deployed in production and feedback can be sent to the product team using the Support Email on the About page.
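
For orientation, the install-and-publish flow looks roughly like the sketch below. It assumes the RDWebClientManagement module from the PowerShell Gallery and a .cer export of the RD Connection Broker's certificate at C:\certs\broker.cer (a placeholder path); the documentation linked above remains the authoritative, up-to-date set of steps for the preview.

# Hedged sketch of the preview flow, run on the RD Web Access server. Cmdlets
# and parameters may change during the preview; follow the official docs.
Install-Module -Name RDWebClientManagement

# Download the latest web client package and register the RD Broker certificate
# (C:\certs\broker.cer is a placeholder path to the exported .cer file).
Install-RDWebClientPackage
Import-RDWebClientBrokerCert "C:\certs\broker.cer"

# Publish the client for users (use -Type Test to stage a test deployment first).
Publish-RDWebClientPackage -Type Production -Latest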


Creating custom stencils


Microsoft Visio includes dozens of stencils for creating diagrams for different lines of business.
A large number of stencils from third-party developers is also available on the internet.
If you have not been able to find the stencils you need, this video explains how to create your own custom stencils.

TechNet forum answerer ratings for March 2018


The forum rating of participants for March 2018 is as follows:

1   Dmitriy Razbornov 
2   Vector BCO 
3   croo 
4   Ilya Tumanov 
5   Антонов Антон 
6   Anahaym 
7   Sergey Ya 
8   Mikhail Efimov 
9   atulyakov 
10  EFIMOVDI 
11  Ilya Ershov 
12  Alexey Klimenko 
13  M.V.V. _ 
14  CheshirCat 
15  Svolotch

Office 365 x Asia's diamond-level technology event | DevDays Asia 2018 Asia Pacific Technical Annual Conference makes its grand debut!


DevDays Asia 2018 Asia Pacific Technical Annual Conference is organized by Microsoft under the guidance of the Industrial Development Bureau, Ministry of Economic Affairs. This major technology event brings you three days of rich content with more than 40 sessions, combining the latest technology sharing, live demonstrations, and hands-on labs.

 

Office 365 development and industry solutions

Through real-world cases, the event shows how to use Microsoft Graph to build enterprise-grade Office solutions, how to acquire and manage user behavior faster and more effectively, and how to master Office Add-ins development techniques and their advantages. This year it also combines AI with the new capabilities of Office 365, using bots on Skype for Business and the Microsoft Teams collaboration platform to build intelligent assistants and brand-new enterprise application services, improving operational efficiency while delivering positive results in information security and business performance!

 

Absolutely not to be missed!

More than 600 enthusiastic developers will gather on site to exchange ideas and spark development energy, making this the largest diamond-level developer event in Asia! Building on the acclaim and wide response of the previous edition, this year's conference will be held on May 28-30 at the Hua Nan Bank International Convention Center. Such opportunities don't come around twice, so invite your friends and colleagues to seize this one together!

 

 
