Channel: TechNet Blogs

Study: Office 365 Germany for the financial industry, the healthcare sector, and education


Storing data exclusively in Germany plays a major role for decision-makers in the digital transformation. This is the finding of a recent study by the analysts at techconsult, commissioned by Microsoft Deutschland GmbH. Office 365 Germany, developed for customers with particularly strict data protection and compliance policies, is what steers many companies, above all mid-sized businesses, toward the cloud.

The study pays particular attention to the financial industry, the healthcare sector, and education. I would like to take a closer look at the results here:

1. Financial industry: security questions have top priority

The financial sector faces particular challenges. The industry is largely saturated and, due to the increasing globalization of the financial markets, exposed to intense competitive pressure. This calls for leaner structures, technical and organizational restructuring, and cost savings, all while customers become ever more demanding. The cloud makes business processes more efficient: it "is the key to a successful financial company," according to the analysts. The cloud must, however, be able to satisfactorily address the two dominant topics of the industry: data protection and data security on the one hand, and mobility and teamwork on the other. Nearly every financial services provider (97 percent) gives security questions top priority, and for almost three out of four companies, mobile use of the productivity suite and mobile access to data are important criteria when selecting office software.

This is increasingly leading the industry to Office 365: 23 percent of the financial services providers surveyed already use Microsoft's cloud-based solution. Besides the familiar features of Word, Excel, and Outlook, the industry especially values mobile access as well as the video and analytics features. In practice, these allowed efficiency and productivity to be increased by an average of 25 percent while simultaneously reducing costs by 23 percent.

2. Healthcare: mobile access to data is growing

Agility, efficiency, and economic viability are three key success factors for the healthcare sector. Prescribing digital technologies for the industry is a task for Microsoft and its partner network. Both the (further) development of clinical processes and the optimization of administration can benefit from cloud solutions such as Office 365 Germany. For example, 6 out of 10 medical institutions need to access data while on the move. Data control and data security play a major role, much as in the financial sector, and are named by over 90 percent of decision-makers as central criteria for choosing office software.

A second pillar alongside data protection is the constant passing of documents through various hands. According to the study, 80 percent of data in healthcare is jointly edited, forwarded, or shared. Strict data protection and compliance policies are therefore a precondition for using cloud-based solutions in the industry. By the way: we have compiled successful real-world examples from the healthcare sector in this feature story.

3. Education: #besserlernen through teamwork and knowledge transfer

In times of transformation, investments in (digital) education are among the indispensable factors for change and for the future of students. Teaching and learning are largely teamwork, which has long since moved beyond pen and paper. Cloud solutions play a central role in the processes of educational institutions; after all, knowledge transfer extends far beyond the classroom or lecture hall. According to the techconsult study, forwarding documents (87 percent) and editing them together (72 percent) play a decisive role.

At the same time, mobile use is also growing steadily in educational institutions. 60 percent of respondents state that teachers work on the move or from home, which is essential, especially for preparing and following up on lessons. This involves access to personal data that requires high security standards; the analysts estimate this applies to around 83 percent of data in education. The sector is increasingly turning to Office 365 with its elevated standards for security and trust: 93 percent of decision-makers name protection against unauthorized data access as a criterion for productivity software.


In summary, Office 365 Germany offers the familiar Office 365 services of the Microsoft Cloud from German datacenters, with customer data protected by the independent data trustee T-Systems International GmbH, which operates under German law. The offering includes, among other things, the desktop versions of the complete Office suite (e.g. PowerPoint, Word, Excel, Outlook), Exchange Online, SharePoint Online, Skype for Business, and OneDrive for Business, as well as the Project Online product family and Visio Pro. You can find the roadmap and further information in the Microsoft Newsroom; the recording of the Virtual Summit on Office 365 Germany, available here, offers additional details.

The Office 365 Germany study discussed here was conducted by techconsult GmbH on behalf of Microsoft Deutschland. 601 companies were surveyed about their use of office software. The contacts within the companies were decision-makers from management, primarily CIOs and IT administrators.



A post by Ulrike Grewe
Product Marketing Manager, Office 365 and Office on mobile devices


The different facets of the Microsoft Monitoring Agent


Summary: learn why the Microsoft Monitoring Agent, a single agent on the server, can handle all of your hybrid cloud monitoring and management.

With multiple management and monitoring solutions, a common problem is that each application requires its own agent.

Customers may want a way to keep the benefits of both System Center Operations Manager (SCOM) and Operations Management Suite (OMS) for monitoring and managing hybrid cloud solutions. Today, whether you monitor with SCOM or with OMS, only one agent is needed: the Microsoft Monitoring Agent. In other words, installing a single agent on the server covers all of your hybrid cloud monitoring and management needs.

In addition, when you decide to use Azure Automation for on-premises automation and install a hybrid runbook worker, the Microsoft Monitoring Agent is installed and deployed as part of that process.

Let's start from a scenario where SCOM 2012 R2 is already deployed. The Microsoft Monitoring Agent version used by SCOM 2012 R2 is 7.1.10184.0. You can see the agent version in the SCOM console under Administration > Device Management > Agent Managed:

 

You can also check it under Control Panel > Microsoft Monitoring Agent:

 

Now, if we connect the server to OMS through SCOM, this does not update the Microsoft Monitoring Agent version shown in the SCOM console, which remains 7.1.10184.0.

However, one limitation of connecting the server to OMS through SCOM is that the server can then only connect to a single OMS workspace. If you want to connect the server to multiple OMS workspaces, you need to install a newer version of the Microsoft Monitoring Agent; for details, see OMS Log Analytics Agent multi-homing support. Then connect the Microsoft Monitoring Agent to OMS directly.

After completing this step, you can see that the agent version shown in the SCOM console has changed:

 

You will also see that the Microsoft Monitoring Agent window in the Control Panel has been updated to introduce new tabs. For example, here is what it looks like after performing this step on DC-01:

 

 

To finish migrating the server from being managed by SCOM to a direct connection, you still need to remove the server from SCOM. To complete this step, go to: Administration > Operations Management Suite > Managed Computers > Add a Computer / Group:

 

Select the computer on which you just updated the Microsoft Monitoring Agent, then click Remove.

 

Next, you need to add the OMS workspace ID and key on the Control Panel > Microsoft Monitoring Agent > Azure Log Analytics (OMS) tab:
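The Control Panel step can also be scripted. Below is a minimal sketch using the agent's AgentConfigManager.MgmtSvcCfg COM object (this assumes the Microsoft Monitoring Agent is already installed and you are running an elevated PowerShell session; the workspace ID and key placeholders are yours to fill in):

```powershell
# Attach the Microsoft Monitoring Agent to an OMS (Log Analytics) workspace.
$mma = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'
$mma.AddCloudWorkspace('<workspace-id>', '<workspace-key>')
$mma.ReloadConfiguration()

# List the workspaces this agent now reports to.
$mma.GetCloudWorkspaces()
```

This can be useful when onboarding many servers at once, since the same few calls can be pushed out through your configuration management tooling.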

 

 

Access web apps to be retired


It was announced this week that Access Services is going to be retired.

Details of the announcement may be found here:  Updating the Access Services in SharePoint Roadmap
Additional timeline information along with possible methods for exporting your data out of an Access web app may be found here: Access web apps no longer supported

While these articles discuss Access web apps that are provided with Access Services 2013/2016, this retirement also affects Access Web Databases provided with Access Services 2010.

In the coming months, we intend to share any tips, tricks, and gotchas discovered while assisting customers through this time of transition as they export their data and find replacement solutions.

Azure Marketing Workshop for Partners

Jeff Stoffel

 

I am always amazed at the quantity and quality of learning opportunities we make available for partners. I realize that marketing Microsoft Azure to our SMB customers can be challenging. Sometimes we all need a little guidance to determine the best approach.

 

Now there’s help!  Registration is open again for the Optimize Azure marketing workshop.

 

Apr 26 10-11 am PST https://aka.ms/azuremarketing2

 

Get the tools you need to market Microsoft Azure

 

During this intensive one-hour workshop, the marketing team will guide you through building a 6-week digital marketing campaign using one of three top-selling Microsoft Azure scenarios.

 

You’ll also receive resources, including:

 

Ready-to-Use Marketing Content such as buyer personas, e-Guides, marketing templates, tweet sheets, blog content, Facebook posts and more

Expert Support from our digital marketing team

 

Don’t miss out

 

Adding an Azure Virtual Machine that uses managed disk into an availability set–manual process


Hi Everyone,

This blog post walks you through the manual process of adding an Azure virtual machine that uses managed disks to an availability set, entirely within the Azure portal (https://portal.azure.com).

This blog assumes you have a good understanding of what an availability set is; if you don't, please refer to Azure availability sets guidelines for Windows VMs and Manage the availability of Windows virtual machines in Azure. It also assumes you are familiar with the concepts of managed disks in Azure; if not, please refer to Azure Managed Disks Overview.

Finally we need the following items already deployed in Azure:

  • Availability set aligned to be used with managed disks
  • The virtual machine using managed disks that is not yet associated with an availability set

 

This example also assumes that you don't have any data disks attached to your virtual machine; if you do, you need to change the dataDisks property in a similar way to what I do with the osDisk property.
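If your VM does have data disks, an entry in the dataDisks array can be rewritten to re-attach the existing managed disk in the same style as the osDisk change described in the steps of this post. A hedged sketch; the LUN, disk name, and resource ID below are placeholders, not values from this walkthrough:

```json
"dataDisks": [
  {
    "lun": 0,
    "name": "<data-disk-name>",
    "createOption": "Attach",
    "managedDisk": {
      "id": "/subscriptions/<GUID>/resourceGroups/<rg-name>/providers/Microsoft.Compute/disks/<data-disk-name>"
    }
  }
]
```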

 

Steps

  1. Sign in at https://portal.azure.com
  2. Export the current template definition of your live resource (virtual machine in this case)
  1. At the top navigation bar of Azure Portal, click in the search field and type “resource explorer”
  2. Click on “resource explorer”

    image

  3. The easiest way to navigate to your resource is to expand Subscriptions->Subscription Name->Providers->Microsoft.Compute->virtualMachines->VM Name

    image

  4. On the details pane, where the template shows up, copy all its template contents and save it locally with notepad or any tool you may find appropriate.
  5. Close “Resource Explorer”
  • Delete your virtual machine (yes, as of the day this article was written, you can delete the virtual machine in Azure and the disks get preserved)
  • Create a new deployment
    1. Click on “+ New”, at the search field, type “template deployment” and press enter

      image

    2. Select “Template Deployment” from the “Everything” blade and click “Create”

      image

    3. At the “Custom deployment” blade, click “Edit template”. This will bring up a blank template structure.

      image

    4. At line 5, between the square brackets “[]”, paste the template you saved in the file.

      Blank template

      image

      Template after content was pasted

      image

    5. Delete the line that contains the “vmId” property; in this guide, this is line 7
    6. Replace the contents of imageReference:
    1. From
      "imageReference": {
          "publisher": "OpenLogic",
          "offer": "CentOS",
          "sku": "7.3",
          "version": "latest"
      },

    2. To
      "imageReference": null,
  • Replace the contents of osDisk:
    1. From
      "osDisk": {
          "osType": "Linux",
          "name": "centos02",
          "createOption": "FromImage",
          "caching": "ReadWrite",
          "diskSizeGB": 31
      },

    2. To
      "osDisk": {
        "createOption": "Attach",
        "osType": "Linux",
        "managedDisk": {
          "id": "<Resource ID of your OS Disk>"
        }
      },

      Where <Resource ID of your OS Disk> needs to point to the resource ID of the currently existing OS disk, which can be obtained by searching for the disk in the search field of the top navigation pane and copying the Resource ID property (note that you need to open a new tab in your internet browser)

      image

      image

  • Replace the content of “osProfile”
    1. From
      "osProfile": {
          "computerName": "centos02",
          "adminUsername": "pmcadmin",
          "linuxConfiguration": {
              "disablePasswordAuthentication": false
          },
          "secrets": []
      },

    2. To
      "osProfile": null,

  • Notice that if your VM has any installed extensions, you need to remove them from this template, otherwise the deployment may fail. You can always add the extensions back later; dealing with extensions is out of scope for this blog post.
  • Just after the very last property of your template (in my case “name”, which is the virtual machine name), add a comma at the end of the “name” line and add the following line (you must have a comma before this new “apiVersion” line, otherwise validation will fail):
    "apiVersion": "2016-04-30-preview"

  • Your template code should be similar to this

    {
      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "parameters": {},
      "resources": [
        {
          "properties": {
            "hardwareProfile": {
              "vmSize": "Standard_A1"
            },
            "storageProfile": {
              "imageReference": null,
              "osDisk": {
                "createOption": "Attach",
                "osType": "Linux",
                "managedDisk": {
                  "id": "/subscriptions/<GUID>/resourceGroups/TEST03-RG/providers/Microsoft.Compute/disks/centos02"
                }
              },
              "dataDisks": []
            },
            "osProfile": null,
            "networkProfile": {
              "networkInterfaces": [
                {
                  "id": "/subscriptions/<GUID>/resourceGroups/test03-rg/providers/Microsoft.Network/networkInterfaces/centos0287"
                }
              ]
            },
            "provisioningState": "Succeeded"
          },
          "type": "Microsoft.Compute/virtualMachines",
          "location": "westus",
          "id": "/subscriptions/<GUID>/resourceGroups/test03-rg/providers/Microsoft.Compute/virtualMachines/centos02",
          "name": "centos02",
          "apiVersion": "2016-04-30-preview"
        }
      ]
    }

  • Click “Save”
  • At the “Resource group” field, select the exact same resource group where your virtual machine existed (and the same location, if available to choose), then check “I agree to the terms and conditions stated above” and click “Purchase”

    image

  • If any errors come up, you can always click the notification icon at the top navigation pane and select the failed deployment:

     

    image

     

    This will bring the failed deployment blade

     

    image

    Then just click the “Redeploy” button so you can edit the template and fix any issues.

     

    If you have virtual machines using unmanaged disks that need to be included in an availability set, you can use a PowerShell module that I developed and posted at the TechNet Gallery using this link.

    That’s it for this blog and I hope this helps.

    Azure Active Directory SaaS App Integration In The Azure Portal


    With dates now announced for the public availability of the Windows 10 Creators Update, and some of the new enterprise mobility capabilities that it contains, the next few posts in this blog will focus on getting familiar with some AAD tasks you may have been performing in the classic Azure Management portal (aka manage.windowsazure.com), showing how they can be done in the Ibiza portal (aka portal.azure.com) instead. Today I’ll focus on SaaS apps in the new portal.

    Figure 1: A customised view of the Azure Portal with a focus on the components of the Enterprise Mobility + Security suite from Microsoft.


    Figure 2: After selecting the Directory tile, we can see the options that are available, including Enterprise applications.


    Figure 3: Enterprise Applications allows us to Add a new app from the details blade, or alternatively view the available apps from All applications


    Figure 4: After selecting Add we are shown the Categories and Add an application blades, which shows the library of existing SaaS apps that have already been integrated, or we can choose to integrate custom line of business apps, set up the AAD Application Proxy, or add another app that isn’t in the gallery.


    Figure 5: From the gallery I have chosen to integrate Twitter


    Figure 6: To easily identify this app amongst multiple Twitter accounts used in the organisation, I’ve named this one after the account it will be sharing


    Figure 7: Intunedin Twitter now appears in All applications


    Figure 8: As this has just been created, there are no users or groups assigned, and no activity


    Figure 9: You can now Add groups or users to the application


    Figure 10: I have selected an existing AAD Security Group – Intunedin tweeters, and
    can now Assign the app to that group.


    Figure 11: We can now see intunedin tweeters in Users and groups, and can Add other users and groups if needed.


    Figure 12: For Single sign-on for Twitter we choose Password-based Sign-on and then Save


    Figure 13: With Single sign-on enabled, Update Credentials is now available from Users and groups


    Figure 14: After selecting Update Credentials the User Name and Password can be entered for the shared account


    Figure 15: After adding the Cloud user to the intunedin tweeters group, the Intunedin Twitter app appears in MyApps


    Figure 16: Clicking Intunedin Twitter opens Twitter in another tab and signs in via password vaulting

    Testing IM and Web-Conferencing Archiving set to Critical


    Organizations often choose to enable IM Archiving for multiple reasons: while some do so for record-keeping purposes, others may have a regulatory/compliance requirement to ensure archiving occurs for every IM and web-conferencing session.

    When an organization is archiving for regulatory/compliance purposes, it may be required to stop the service if archiving is failing. In Lync Server 2013 and Skype for Business Server 2015, we offer this by means of a setting on the Set-CsArchivingConfiguration cmdlet called BlockOnArchiveFailure.

    Parameter: BlockOnArchiveFailure
    Required: Optional
    Type: System.Boolean
    Description: If True, then the IM service will be suspended any time instant messages cannot be archived. If set to False (the default value), IM will continue even if instant messages cannot be archived.
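    As a sketch, the setting can be applied from the Skype for Business Server Management Shell. The Global identity shown here is the default scope; adjust it if you use site- or pool-level archiving configurations:

```powershell
# Block IM and web conferencing whenever instant messages cannot be archived.
Set-CsArchivingConfiguration -Identity Global -BlockOnArchiveFailure $true

# Verify the change.
Get-CsArchivingConfiguration | Select-Object Identity, BlockOnArchiveFailure
```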

     

    This can also be accessed from the Control Panel and would look similar to the image below.

    IM Archiving set to Critical

    Just as organizations perform disaster recovery exercises and routines to validate that their infrastructure works as intended, and that the organization (or organizational unit) is prepared with up-to-date documentation should a disaster event occur, organizations may also want to test and/or prove that IM and web-conferences would fail if archiving were to fail.

    With Lync Server 2013 and Skype for Business Server 2015, proving that IM and web-conferencing would stop if archiving were to fail can be a little challenging. Here’s why:

    Challenge #1
    If the Archiving database is offline, Lync will export storage data to the web services file share (for example \\contoso.com\LyncFileShare\1-WebServices-1\StorageService\Data\Archive\20161122\LyncStd01.contoso.com)

    Challenge #2
    If the Archiving database is offline and the web services file share is not accessible, we would see Event ID 32080 and the system would fall back to C:\ProgramData\Microsoft\Skype for Business Server 2015\StorageService

    Challenge #3
    If the Archiving database is offline, the web services file share is not accessible, and access to the path C:\ProgramData\Microsoft\Skype for Business Server 2015\StorageService is also blocked, the local database can hold at most 5,000 items or up to 10 GB (a SQL Express limitation)

    The challenges mentioned above can make this operationally challenging to undo. There can be a lot of delay in undoing the efforts, which can cause a loss of productivity.

    Solution #1
    Stop the LYNCLOCAL SQL Server instance on all Lync Front-End Servers in the pool where we want to simulate a failure. Since this is a rather common solution, a second option follows.

    Solution #2
    Set the LySS database offline in SQL so that all access from the communications server is blocked. This can be accomplished by running the following query on each Front-End Server:

             ALTER DATABASE LySS SET OFFLINE WITH ROLLBACK IMMEDIATE

    As soon as this is completed on an Enterprise Edition pool or a Standard Edition pool, IM messages will stop transmitting from the pool. Presence will still be available, but both IM and web-conferencing will be failing.

    In order to bring services back to business as usual, bring the database online by running:

            ALTER DATABASE LySS SET ONLINE

    Once the databases in your routing group are online, you will be able to use IM and web-conferencing again.
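    During the test, the state of the store can be confirmed directly from SQL Server; a small sketch using the standard sys.databases catalog view (run it against the local instance hosting the lyss database on each Front-End Server):

```sql
-- Shows ONLINE or OFFLINE for the local Storage Service (LySS) database.
SELECT name, state_desc
FROM sys.databases
WHERE name = 'lyss';
```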

     


    Here are some event logs which may be useful during testing. I am adding them so the web page is indexed and administrators come to an authoritative source when they search for Event IDs or descriptions.

    Event ID | Source | Description
    56717 LS Data Collection IM was blocked in critical archiving mode due to local Storage Service is full or unavailable.
    Cause: Storage Service or its dependent components are not running.
    Resolution:
    Ensure the local Storage Service database is not full and target storages such as SQL Server or Exchange Server are available.
    56800 LS Data Collection Failed to commit session data into the local Storage Service database.
    Error:
    SessionUpdateException: code=Success, reason=, Unable to finalize session, no session items removed, no new items enqueued
    at Microsoft.Rtc.Internal.Storage.Queue.LyssQueueDal.FinalizeSession(StoreContext ctx, Guid adapterID, HashSet`1 sessionIDs, List`1 queueItemList)
    at Microsoft.Rtc.Server.UdcAdapters.UcSessionAdapter.WrapperFinalizeSession(StoreContext ctx, LyssQueueDal dal, HashSet`1 sessionIds, List`1 queueItems)
    at Microsoft.Rtc.Server.UdcAdapters.UcSessionAdapter.FinalizeSession(StoreContext ctx, LyssQueueDal dal, HashSet`1 sessionIds, List`1 persistItems)
    at Microsoft.Rtc.Server.UdcAdapters.UcSessionAdapter.PersistSession(StoreContext ctx, LyssQueueDal dal, SessionState entry, Boolean isCriticalMode)
    Cause: Storage Service or its dependent components are not running.

    Resolution:

    Ensure the local Storage Service database is not full and target storages such as SQL Server or Exchange Server are available.

    32042 LS Storage Service Storage Service API failed to add a message to the queue.

    Add Queue Message failure. EnqueueException: code=ErrorQueueUnhealthy, reason=Unable to Enqueue Message: Storage Queue is not healthy due to errors: Storage Service Database is full.

    . Please retry later.

    at Microsoft.Rtc.Internal.Storage.Api.StorageService.BeginEnqueueMessages(EnqueueMessagesRequest enqueueMessagesRequest, AsyncCallback asyncCallback, Object state)

    Cause: Authentication or authorization failure, bad input parameters, fabric errors, timeouts, other errors.

    Resolution:

    Check event details. Ensure that the caller of Storage Service is properly authenticated using windows authentication, and has the required authorization based on security group membership. Verify that inputs were valid. If problem persists, notify your organization’s support team with the event detail.

    32008 LS Storage Service Unexpected exception.

    Message=Error: Path \\contoso.com\LyncFileShare\1-WebServices-1\StorageService\Data\Archive\20161122\LyncStd01.contoso.com failed to be read for flushed data. Error details: System.IO.IOException: The network path was not found.

    at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)

    at System.IO.FileSystemEnumerableIterator`1.CommonInit()

    at System.IO.FileSystemEnumerableIterator`1..ctor(String path, String originalUserPath, String searchPattern, SearchOption searchOption, SearchResultHandler`1 resultHandler, Boolean checkHost)

    at System.IO.Directory.GetFiles(String path, String searchPattern, SearchOption searchOption)

    at Microsoft.Rtc.Internal.Storage.Sql.LyssDal.CheckFilePathForFlushedFiles(StoreContext ctx, String parentFilePath, Boolean checkArchived, Boolean& errorOccurred, Int32& numDataFilesToReport)

    Exception: The network path was not found.

    Stack Trace: at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)

    at System.IO.FileSystemEnumerableIterator`1.CommonInit()

    at System.IO.FileSystemEnumerableIterator`1..ctor(String path, String originalUserPath, String searchPattern, SearchOption searchOption, SearchResultHandler`1 resultHandler, Boolean checkHost)

    at System.IO.Directory.GetFiles(String path, String searchPattern, SearchOption searchOption)

    at Microsoft.Rtc.Internal.Storage.Sql.LyssDal.CheckFilePathForFlushedFiles(StoreContext ctx, String parentFilePath, Boolean checkArchived, Boolean& errorOccurred, Int32& numDataFilesToReport)

    Cause: Unexpected exception.

    Resolution:

    If problem persists, notify your organization’s support team with the event detail.

    32013 LS Storage Service Cannot perform a LYSS database operation.

    Message=#CTX#{ctx:{traceId:18446744072925107599, activityId:”c0af1230-6791-473f-a13a-76795835de80″}}#CTX# FinalizeSession sproc failed: SprocNativeError = [1105]

    Exception: System.Data.SqlClient.SqlException (0x80131904): Could not allocate space for object ‘dbo.ItemQueue’.’CL_ItemQueue’ in database ‘lyss’ because the ‘PRIMARY’ filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)

    at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)

    at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)

    at System.Data.SqlClient.SqlDataReader.TryConsumeMetaData()

    at System.Data.SqlClient.SqlDataReader.get_MetaData()

    at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)

    at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite, SqlDataReader ds, Boolean describeParameterEncryptionRequest)

    at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite)

    at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)

    at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)

    at System.Data.SqlClient.SqlCommand.ExecuteReader()

    at Microsoft.Rtc.Common.Data.DBCore.Execute(SprocContext sprocContext, SqlConnection sqlConnection, SqlTransaction sqlTransaction)

    ClientConnectionId:8d59a7be-4c40-4747-9d00-33b889057e0c

    Error Number:1105,State:2,Class:17

    Stack Trace: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)

    at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)

    at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)

    at System.Data.SqlClient.SqlDataReader.TryConsumeMetaData()

    at System.Data.SqlClient.SqlDataReader.get_MetaData()

    at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)

    at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite, SqlDataReader ds, Boolean describeParameterEncryptionRequest)

    at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite)

    at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)

    at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)

    at System.Data.SqlClient.SqlCommand.ExecuteReader()

    at Microsoft.Rtc.Common.Data.DBCore.Execute(SprocContext sprocContext, SqlConnection sqlConnection, SqlTransaction sqlTransaction)

    Cause: Cannot perform an LYSS database operation.

    Resolution:

    Verify that the data is valid and that the LYSS database is available and healthy. If this error caused by Violation of UNIQUE KEY constraint ‘CL_ItemQueue’, then most likely it is due to at attempt to load a duplicate item from the file share. If so, find flushed xml file that contains duplicated key and move the xml file to somewhere else. In addition, please verify the file share is healthy.

    32059 LS Storage Service Space Used by Storage Service DB is at or above the Critical Threshold.

    SQL Edition=Express Edition (64-bit); Space Used Percent=87.5; Critical Threshold Percent=80 queue item counts summary:

    owned: True, status: 2, critical: True, count: 356

    Total queue items: 356, total archived items: 0

    Cause: The DB size can grow bigger under heavier usage as the data in the Storage Service Queue and/or Cache grows. Once Storage Service finishes processing the data, the db will shrink back to normal size. However breaching the critical threshold implies that the normal processing of the data is slow or blocked resulting in so much excessive DB growth that service functionality is now affected and blocked.

    Resolution:

    Check event details to find the root cause of why data is not getting processed. Resolve the root cause to allow Storage Service to start shrinking the DB down naturally. If problem persists, notify your organization’s support team with the event details.

    32089 LS Storage Service A flush of queue items from the Storage Service DB was initiated, and items were exported to the file system.

    Queue size: Error, flushed 1 files to the filesystem. success: True.

Files: \\contoso.com\LyncFileShare\1-WebServices-1\StorageService\DataArchive\20161122\LyncStd01.contoso.com\e1dc38d13ed15269b601a5460e8f9631__1.xml

    Cause: Periodically, or in reaction to the size of the Storage Service queue, we may purge items from the database, exporting them to the file system in order to ensure performance isn’t impacted due to the accumulation of data. These items should be re-imported after the root cause of the accumulation is resolved. Typically this would occur due to an outage of a data storage endpoint (like Exchange), or could be due to a sustained period of high system load.

    Resolution:

    The resource kit tool is available to import exported items back into the DB for processing.

32090 LS Storage Service Flushed queue items from the Storage Service DB have been left unattended for some time and require attention to be imported back.

Parent Path \\contoso.com\LyncFileShare\1-WebServices-1\StorageService. 112 data files are over 5 days old.

Cause: Periodically, or in reaction to the size of the Storage Service queue, we may purge items from the database, exporting them to the file system in order to ensure performance isn’t impacted due to the accumulation of data. These items should be re-imported after the root cause of the accumulation has been resolved. Typically this would occur due to an outage of a data storage endpoint (like Exchange), or could be due to a sustained period of high system load.
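A small script can watch for this condition. The Python sketch below is illustrative (the `.xml` extension and the parent path come from the sample events above; adjust both for your environment): it lists flushed data files older than the five-day threshold the 32090 event reports on.

```python
import time
from pathlib import Path

def stale_flush_files(parent_path, max_age_days=5):
    """Return names of flushed data files older than max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    return sorted(
        p.name
        for p in Path(parent_path).rglob("*.xml")
        if p.is_file() and p.stat().st_mtime < cutoff
    )
```

A non-empty result means flushed items are waiting to be imported back into the Storage Service DB.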

    32080 LS Storage Service A queue flush operation has encountered a file error.

Preliminary primary fileShareName parameter: \\contoso.com\LyncFileShare\1-WebServices-1\StorageService is unusable. Exception: System.IO.DirectoryNotFoundException: Failed to get DirectoryInfo of \\contoso.com\LyncFileShare\1-WebServices-1\StorageService at Microsoft.Rtc.Internal.Storage.Sql.LyssDal.ValidateFileShareName(StoreContext ctx, String fileShareName, String timestamp, LyssDBUsageStatus usageLevel, Boolean isTenantMigration)

Cause: There may be permission issues with the file share, local file location, or temporary directory, or the disk may be full.

    Resolution:

Check the event details and trace log for more information, and ensure there is write permission to the required file locations.

    References:

    · Import Storage Service Data

    · Archiving Options in Lync Server 2013

    · The LCSLog SQL Database is not logging any archiving content

    · Understanding Monitoring and Archiving on Lync Server 2013

[Script Of Apr. 1] How to fix ‘no Remote Desktop License Servers available to provide a license’


Growing Opportunities for Cloud Partners with Open Source [Updated 4/1]


(This article is a translation of Growing Opportunities for Cloud Partners with Open Source, published on the Microsoft Partner Network blog on March 2, 2017. Please refer to the original page for the latest information.)

     

    joel

     

Digital transformation is driving a sharp rise in cloud-related IT spending. IDC predicts (in English) that worldwide public IT cloud services revenue will reach $141.2 billion in 2019. In 2017, more than 60% of enterprises are expected (in English) to adopt open source and open APIs as the foundation of their cloud integration strategies. Microsoft is committed to helping partners seize this cloud opportunity and advance digital transformation by delivering solutions that solve both the business and technical challenges of every customer.

As part of that commitment, we are investing in open source technologies and open standards. This lets our partner ecosystem leverage existing skills and assets to serve an ever-growing customer base. Our recently announced partnership with Red Hat (in English), our membership in the Linux Foundation (in English), and the release of Pivotal Cloud Foundry on the open-source-friendly Azure platform are all signs of Microsoft's drive toward collaboration and innovation across technology ecosystems.

Microsoft Azure in particular supports a very broad range of operating systems, programming languages, frameworks, tools, databases, and devices. By taking advantage of these offerings, both new and existing partners can enable new scenarios for their customers.

The business opportunity is clearly here: roughly one in three Azure virtual machines runs Linux, and more than 1,000 repositories are published on Microsoft's GitHub site (in English).

     

     

Building Azure knowledge and skills across your team

While Microsoft works to deliver an open-source-friendly cloud, we are also helping partners build the skills needed to address this growing opportunity. In a survey of enterprise CIOs, the most frequently cited obstacle to digitalization (in English) was the skills gap, and the need for IT partners with the right expertise keeps growing.

To help partners develop a knowledgeable, skilled workforce, Microsoft offers a wide range of technical training, tools, and resources. We recently announced an opportunity to earn Azure certifications at a substantial discount, and published free, self-paced Microsoft Azure training courses that use an interactive online learning environment. This modern learning model, known as the Massive Open Online Course (MOOC), combines online videos and demos with labs, graded assessments, individual Q&A, and other learning formats (in English).

To prove your ability to design, build, deploy, and manage complex, cloud-enabled Linux solutions that leverage the open source capabilities of Microsoft Azure, we recommend the Linux on Azure certification, offered in partnership with the Linux Foundation. Completing the associated assessments and exams also counts toward the requirements for the Microsoft Cloud Platform competency, positioning you to meet the growing demand for infrastructure and software-as-a-service (SaaS) solutions built on Microsoft Azure.

     

     

A partner success story: Pivotal

Here is one partner that, as an Azure professional, is helping enterprises execute their digital transformation strategies: Pivotal, a US company and a member of the open source cloud-native platform Cloud Foundry (in English). Cloud Foundry is an ideal environment for running Java-based microservices, and its use among enterprise Java developers is growing rapidly.

Over the past year, Microsoft has worked with Pivotal (in English) to develop enterprise-grade experiences for our shared customers. For example, we enabled one-click deployment of Pivotal Cloud Foundry from the Azure Marketplace (in English) to make getting started easy, and we delivered the Azure Service Broker (in English) so that Azure services (Storage, Azure SQL Database, Service Bus, DocumentDB, Redis Cache, and more) are easy to consume from within Cloud Foundry applications.

This jointly developed solution focuses on enterprise Java applications and includes first-class support for Spring, an application framework well suited to rapid development of Twelve-Factor Apps. Microsoft and Pivotal have also integrated their support teams and workflows: a unified portal experience, cross-training of support staff, and ongoing knowledge sharing give customers seamless, enterprise-class support.

Nick Cayou, Vice President of Global Alliances at Pivotal, commented: "We are thrilled to serve many shared customers (in English), including Ford and Merrill Corp. Some of them are already running large production workloads and transforming their businesses with cloud-native applications that take full advantage of the cloud. I have been very impressed by the support we received from various groups across Microsoft in making our joint business a success. The Azure training resources, for example, made it easy to ramp up our own support team."

     

     

How to take advantage of the open source capabilities of Microsoft Azure

• Use training and certifications to deepen your understanding of the Azure platform, and build a plan for incorporating Azure into your services.

     

     

     

     

     

    What’s new for US partners the week of April 3, 2017


    Find out what’s new for Microsoft partners. We’ll connect you to resources that help you build and sustain a profitable cloud business, connect with customers and prospects, and differentiate your business. Read previous issues of the newsletter and get real-time updates about partner-related news and information on our US Partner Community Twitter channel.

    You can subscribe to receive posts from this blog in your email inbox or as an RSS feed.

    Looking for partner training courses, community calls, and information about technical certifications? Read our MPN 101 blog post that details your resources, and refer to the Hot Sheet training schedule for a six-week outlook that’s updated regularly as we learn about new offerings. Monthly recaps of the US Partner Community calls and blog posts are also available.

    To stay in touch with me and connect with other partners and Microsoft sales, marketing, and product experts, join our US Partner Community on Yammer and see other options to stay informed.

    Top stories

    Windows 10 Creators Update coming April 11

    Time is running out! Polish, finalize, and submit your 2017 Partner of the Year Awards nomination

    Deliver digital transformation impact with Secure Productive Enterprise

    Microsoft Inspire Session Catalog now available

    Add managed services to your cloud offering for increased revenue and profitability

    Help your customers do more with new Windows 10 Pro devices

    Microsoft Azure training, certification offers, and ISV solutions

    Celebrate National Small Business Week, April 30 – May 6

    Microsoft and Adobe announce availability of joint offerings

    Our guide to finding and taking the technical training you need

    US Partner Community call schedule

    Community calls and a regularly updated, comprehensive schedule of partner training courses are listed on the Hot Sheet.

    Top Contributors Awards! Adding Custom Ribbon Tab To Server Ribbon, Novidades no Windows Phone 8


    Welcome back for another analysis of contributions to TechNet Wiki over the last week.

    First up, the weekly leader board snapshot…

     

    01042017

    As always, here are the results of another weekly crawl over the updated articles feed.

     

    Ninja Award Most Revisions Award
    Who has made the most individual revisions

     

    #1 RajeeshMenoth with 195 revisions.

     

    #2 Peter Geelen with 194 revisions.

     

    #3 Sjoukje Zaal with 92 revisions.

     

    Just behind the winners but also worth a mention are:

     

    #4 M.Vignesh with 89 revisions.

     

    #5 Burak Ugur with 78 revisions.

     

    #6 chilberto with 61 revisions.

     

    #7 Ken Cenerelli with 42 revisions.

     

    #8 Jeff (Netwrix) with 41 revisions.

     

    #9 Carsten Siemens with 39 revisions.

     

    #10 José Diz with 22 revisions.

     

    Ninja Award Most Articles Updated Award
    Who has updated the most articles

     

    #1 RajeeshMenoth with 129 articles.

     

    #2 Peter Geelen with 84 articles.

     

    #3 M.Vignesh with 52 articles.

     

    Just behind the winners but also worth a mention are:

     

    #4 Carsten Siemens with 29 articles.

     

    #5 Jeff (Netwrix) with 23 articles.

     

    #6 chilberto with 21 articles.

     

    #7 Burak Ugur with 20 articles.

     

    #8 José Diz with 19 articles.

     

    #9 Ken Cenerelli with 13 articles.

     

    #10 Sjoukje Zaal with 9 articles.

     

    Ninja Award Most Updated Article Award
    Largest amount of updated content in a single article

     

    The article to have the most change this week was SharePoint 2013: Adding Custom Ribbon Tab To Server Ribbon, by sagar pardeshi

    This week’s revisers were Peter Geelen, RajeeshMenoth, M.Vignesh & sagar pardeshi

Nice article, Sagar; it still needed some revision work this week.

     

    Ninja Award Longest Article Award
    Biggest article updated this week

     

    This week’s largest document to get some attention is Windows Phone: Novidades no Windows Phone 8, by WindowsPhoneContent

    This week’s reviser was Burak Ugur

     

Nice article, with good use of images. Good work!

    Ninja Award Most Revised Article Award
    Article with the most revisions in a week

     

    This week’s most fiddled with article is T-SQL: Common Table Expression(CTE), by Sachin P. It was revised 16 times last week.

    This week’s revisers were RajeeshMenoth, Ken Cenerelli, Sachin P & Waqas Sarwar(MVP)

The CTE is nicely explained with a step-by-step example.

     

    Ninja Award Most Popular Article Award
    Collaboration is the name of the game!

     

    The article to be updated by the most people this week is TechNet Guru Competitions – March 2017, by Peter Geelen

    Go Gurus Go!! 65 articles submitted in all categories. Go Go Gurus!!

    This week’s revisers were Melick, Peter Geelen, Sjoukje Zaal, Mahdi Tehrani, Sachin P, J Venuto, Baran Mano, Kaviya & PriyaranjanKS

     

    Ninja Award Ninja Edit Award
    A ninja needs lightning fast reactions!

     

Below is a list of this week’s fastest ninja edits. That’s an edit to an article made shortly after another person’s edit.

     

    Ninja Award Winner Summary
    Let’s celebrate our winners!

     

    Below are a few statistics on this week’s award winners.

    Most Revisions Award Winner
    The reviser is the winner of this category.

    RajeeshMenoth

    RajeeshMenoth has won 10 previous Top Contributor Awards. Most recent five shown below:

    RajeeshMenoth has TechNet Guru medals, for the following articles:

    RajeeshMenoth has not yet had any interviews or featured articles (see below)

    RajeeshMenoth’s profile page

    Most Articles Award Winner
    The reviser is the winner of this category.

    RajeeshMenoth

    RajeeshMenoth is mentioned above.

    Most Updated Article Award Winner
    The author is the winner, as it is their article that has had the changes.

    sagar pardeshi

    This is the first Top Contributors award for sagar pardeshi on TechNet Wiki! Congratulations sagar pardeshi!

    sagar pardeshi has not yet had any interviews, featured articles or TechNet Guru medals (see below)

    sagar pardeshi’s profile page

    Longest Article Award Winner
    The author is the winner, as it is their article that is so long!

    WindowsPhoneContent

    WindowsPhoneContent has won 6 previous Top Contributor Awards. Most recent five shown below:

    WindowsPhoneContent has not yet had any interviews, featured articles or TechNet Guru medals (see below)

    WindowsPhoneContent’s profile page

    Most Revised Article Winner
The author is the winner, as it is their article that has been changed the most

    Sachin P

    This is the first Top Contributors award for Sachin P on TechNet Wiki! Congratulations Sachin P!

    Sachin P has not yet had any interviews, featured articles or TechNet Guru medals (see below)

    Sachin P’s profile page

    Most Popular Article Winner
    The author is the winner, as it is their article that has had the most attention.

    Peter Geelen

    Peter Geelen has been interviewed on TechNet Wiki!

    Peter Geelen has featured articles on TechNet Wiki!

    Peter Geelen has won 168 previous Top Contributor Awards. Most recent five shown below:

    Peter Geelen has TechNet Guru medals, for the following articles:

    Peter Geelen’s profile page

    Ninja Edit Award Winner
    The author is the reviser, for it is their hand that is quickest!

    Burak Ugur

    Burak Ugur has won 19 previous Top Contributor Awards. Most recent five shown below:

    Burak Ugur has TechNet Guru medals, for the following articles:

    Burak Ugur has not yet had any interviews or featured articles (see below)

    Burak Ugur’s profile page

     

    Sjoukje Zaal

    Sjoukje Zaal has won 2 previous Top Contributor Awards:

    Sjoukje Zaal has TechNet Guru medals, for the following articles:

    Sjoukje Zaal has not yet had any interviews or featured articles (see below)

    Sjoukje Zaal’s profile page

     

    Waqas Sarwar(MVP)

    Waqas Sarwar(MCSE 2013) has been interviewed on TechNet Wiki!

    Waqas Sarwar(MCSE 2013) has won 28 previous Top Contributor Awards. Most recent five shown below:

    Waqas Sarwar(MCSE 2013) has TechNet Guru medals, for the following articles:

    Waqas Sarwar(MCSE 2013) has not yet had any featured articles (see below)

    Waqas Sarwar(MCSE 2013)’s profile page

     

    What another great week from all in our community! Thank you all for so much great literature for us to read this week! Please keep contributing, keep reading, and of course keep in touch!!

     

    Best regards,

    — Ninja Kamlesh

     

Getting Started with Microsoft Azure: A Simple Proposal Deck [Updated 4/2]


    azureoverview01

For partners who regularly propose Microsoft Azure to customers, we have put together a 28-page booklet (PPT) that comprehensively covers the latest Azure capabilities, from IaaS to PaaS, IoT, and AI.

     

Download the proposal deck

A quick look inside the booklet!

azureoverview02 Azure is organized into three areas: IaaS, hybrid cloud (StorSimple and others), and PaaS (IoT, AI, and more).
azureoverview03 For customers considering the cloud for the first time, we have grouped current real-world cloud usage into five representative patterns you can use to drive a discussion with the customer.
azureoverview04 Lists the main ongoing sources of Azure information, such as how to gather Azure news and where to find seminars.
azureoverview05 For IaaS, which almost every customer evaluates, the key question is "How do we connect on-premises systems to the cloud?" A page explains connecting via VPN and ExpressRoute.
azureoverview06 Explains Azure virtual machine sizes and their characteristics on a single page.
azureoverview07 Explains hybrid cloud, a hallmark of Azure, and its distinctive management tools: StorSimple, OMS, Azure Active Directory, and more.
azureoverview08 Covers the more advanced side of Azure: PaaS, IoT, and AI.
azureoverview09 "What does an IoT system actually look like?" For this frequently asked question, we included a concrete picture of an IoT system built entirely on Azure PaaS.
azureoverview10 At the end of the booklet, to answer the question "So roughly how much does it cost?", we included sample monthly prices for two IaaS configurations and an IoT system.

SCCM Miniseries (System Center Configuration Manager), Part 3: SCCM 2012 R2 – Installing SQL Server


Installing SQL Server

Several versions of SQL Server with various service packs are supported. Study the documentation on Microsoft's site carefully to see which version can be used with the different System Center components; this matters if you deploy several System Center products. If you are installing only SCCM, a detailed list of supported versions is on a separate Microsoft TechNet page. In general, you can safely use SQL Server 2012 with no service pack or with SP1 through SP3. If you start from an older level, updating to SP3 is of course recommended.

We will also need accounts under which the SQL Server services will run and under which SCCM will operate. For testing or simple deployments (a school, a small company) one account is enough, and it is best to create it in AD. The name does not matter much, but it should tell you what the account is used for. I use the domain accounts "nymsa-test\sccmadmin" for SCCM and "nymsa-test\scomadmin" for SCOM. If you prefer to create separate accounts, you can take inspiration, for example, from the SCCM Accounts section here. The domain account sccmadmin (or scomadmin) must be a member of the Domain Admins group.

SQL Server installation procedure:

Launch setup and select Installation – New SQL Server stand-alone installation or add features to an existing installation

A check runs and must pass (a warning may sometimes appear)

    01-instalace

Now enter the Product Key and confirm

Accept the license and choose whether to allow sending certain information (SQL Server configuration and usage) to Microsoft

Allow downloading updates for SQL Server (if offered)

    02-sql-update

Another check runs (it must pass)

    03-kontrola-2

Select SQL Server Feature Installation

    04-sql-server-features

Select these features:

–        Database Engine Services

–        Full-text and Semantic Extractions for Search

–        Analysis Services

–        Management Tools – Basic

–        Management Tools – Complete

    05-sql_server_features-vyber-2

Leave the default installation directory

Another check runs, which again must pass

Select Default Instance and keep the default SQL Server instance name "MSSQLSERVER"

    07-default-instance

A free disk space check runs. It must again pass; review the values shown

Now enter the accounts under which the SQL Server services will run, and the Collation

    10-sql-accounts-upraveno-2

     

Switch from the accounts to the Collation tab and verify the setting.

CAUTION! The Collation setting must be "SQL_Latin1_General_CP1_CI_AS".

    11-collation-2

Select Windows Authentication Mode and, at the bottom, add the user NYMSA-TEST\sccmadmin. You can also add the domain administrator (NYMSA-TEST\administrator).

Do not change anything on the Data Directories and FILESTREAM tabs.

    12-authentication-2

In the Analysis Services window, again add the user NYMSA-TEST\sccmadmin

    13-analysisservices-2

Again decide whether to send usage data to Microsoft

Another check runs, this time the last one.

A summary of the installation settings appears; finally confirm with Install

After installation, a final report is shown; every installed component must show Success

    14-poslednikontrola

That completes the SQL Server installation. Now we will apply some basic configuration.

     

     

Limiting RAM for SQL Server

Start SQL Management Studio, select Memory, and set both the minimum and maximum limit to 8192 MB. If you will store a lot of data in SCCM, the maximum can be higher. For testing or a school, this should be enough.

    16-sql-server-memory-2

     

A possible problem signing in to SQL Server

When SQL Management Studio starts, the following window appears. We installed SQL Server with Windows Authentication, so we must leave it selected here. The currently signed-in user is filled in automatically and unfortunately cannot be changed. We had specified the domain user sccmadmin.

    17-sql-login

If we now click Connect, the sign-in fails.

    17-sql-login-failed

There are three ways to solve this problem.

1)      Sign in to the server with the domain account nymsa-test\sccmadmin. This account belongs to the Domain Admins group and therefore has administrator rights on this server as well. SQL Management Studio then picks up this sign-in.

2)      Hold Shift, right-click the SQL Management Studio icon, and select Run as Different User. Now we can enter our domain user NYMSA-TEST\sccmadmin.

3)      If we want to sign in to the server as the local administrator, we can grant that account rights on the SQL Server. Follow the procedure below.

     

     

Adding a user with admin rights to SQL Server

Sign in using the first or second method and start SQL Management Studio. Expand Security and Logins. Right-click Logins and select New Login.

    18-sql-logins-2

Keep Windows Authentication and select Search

    19-login-new-2

Click Advanced

    20-login-select-2

Now select Find Now

    21-login-findnow-2

Browse to, select, and confirm the local administrator. Confirm again to return to the Login – New window.

    22-lokalniadmin-2

Back in the Login – New window, click Server Roles and select public and sysadmin.

    23-serverrole-2

Switch to the Securables tab and verify that, for Connect SQL, the Grantor column shows sa and the Grant column is checked.

    24-securables-2

Confirm and close SQL Management Studio.

Now we can start SQL Management Studio as the local administrator.

Finally, we can update the installed SQL Server.

    Karel Nymsa – MIE Expert

    Gymnázium Jaroslava Žáka Jaroměř

Sunday Surprise – Staying on top of the news: the .NET Blog and The week in .NET


    dotnet-logo3

Hello, readers of the Wiki Ninjas Brasil blog!

Welcome to another Sunday Surprise.

Maintained by the .NET platform engineering teams, the .NET Blog is a public channel with news of general interest and development tips:

    https://blogs.msdn.microsoft.com/dotnet

One of the blog's regular columns is The week in .NET, with announcements useful to anyone who wants to follow the evolution of .NET and the many technologies that make it up. To revisit any of these posts, go to:

    https://blogs.msdn.microsoft.com/dotnet/tag/week-in-net/

And since today's topic is .NET, be sure to also follow the many development articles already published on TechNet Wiki:

    https://social.technet.microsoft.com/wiki/pt-br/

That's it for today... See you next time!

       

    Wiki Ninja Renato Groffe (MVP, Wiki, Facebook, LinkedIn, MSDN)

    AAD Application Proxy In The Azure Portal


Following on from the last post, I’ll continue with the app story, but this time via the Azure Active Directory Application Proxy. The App Proxy lets you publish on-premises web apps while leveraging the identity security benefits that Azure Active Directory provides. For OEM partners looking at ways to integrate traditional in-house web apps with new cloud capabilities, the AAD App Proxy makes this process very easy.

    Figure 1: The initial steps for setting up the AAD App Proxy include choosing Enterprise Applications within Azure Active Directory, and then clicking Application Proxy

    Figure 2: Next we need to choose Download Connector

    Figure 3: From the server where we want to run the connector we run setup.

    Figure 4: The only configuration we need to perform on the server is signing in to our Azure AD Global Admin account.

    Figure 5: Switching back to the Azure Portal, choose Add an application, and then populate the Add your own on-premises application

    Figure 6: Once the new app has been added, we can make some customisations, including enabling the app and choosing a logo, amongst others.

    Figure 7: Next we should add a test user or group, and we do this via Add Assignment, Users and Groups, and Invite.

    Figure 8: Signing in to myapps.microsoft.com we can see that internalapp is now published to Admin

    Figure 9: Clicking on internalapp opens up a new tab, where you can see the msappproxy.net URL and the successfully loaded web page from the internal server we published


    Learn how to build the next generation of intelligent apps at this free Microsoft AI Workshop


    The Microsoft AI Immersion Workshop is being held on Tuesday, May 9th, at the W Hotel in Seattle. If you are a developer interested in creating the next generation of intelligent apps and enterprise-grade solutions using the latest AI and machine learning techniques, this free workshop is for you. This is an in-person event, and capacity is limited, so register now to reserve your spot.

    Register

    The Workshop includes a keynote talk covering the breadth and depth of Microsoft’s AI investments and offerings, followed by five deep technical tutorials on the topics below. Seasoned software engineers and data scientists from Microsoft – people who are building some of the world’s most advanced AI and ML technologies – will run these hands-on tutorials.

    1. Applied Machine Learning for Developers.
    2. Big AI – Applying Artificial Intelligence at Scale.
    3. Weaving Cognitive and Azure Services to Provide Next-Generation Intelligence.
    4. Deep Learning and the Microsoft Cognitive Toolkit.
    5. Building Intelligent SaaS Applications.

    AllImmersion

Tutorials will focus on hands-on projects. Session abstracts and instructor names are available in the original post here. We hope to see many of you at this event in Seattle!

    SQL Server blog team

    Effective cost saving in Azure without moving to PaaS


    azurepaas

By Chris Taylor, DevOps Engineer at DevOpsGuys

    Recently we were asked if we could help a company reduce the costs of their infrastructure with Microsoft Azure. They wanted to move rapidly, so working on a plan to re-architect their system to make use of the Platform-as-a-Service (PaaS) features that Azure provides wasn’t a viable option – we had to find another way.

Naturally, our first recommendation would have been to resize the servers in all non-production environments to bring compute charges down. However, this wasn’t an option either, as they used these environments to gauge the performance of the system before rolling changes out to production.

    Using the Azure Automation feature to turn VMs on and off on a schedule

We noticed that they had non-production environments (development, test, UAT and pre-production) that were not used all the time. However, they were powered on in Azure around the clock, which meant they were continually being charged for compute resource. Most of the people using these environments were based in the UK, so the core hours of operation were between 7 am and 10 pm GMT (accommodating all nightly processes).

Using this knowledge, plus our knowledge of PowerShell and Azure Automation, we wrote a custom runbook to turn off all servers in each environment at a given time, and another runbook that does the opposite and turns them back on again.

We wanted to follow best practices in the code, so hard-coded values were out and parameters and tags were in: parameters pass in a resource group, and tags identify the servers to be turned off. Tags were especially useful here, as in some resource groups we couldn’t power down every server.

    Below is some of the PowerShell used to turn the VMs off:

workflow Stop-ResGroup-VMs {
    param (
        # Resource group whose VMs should be stopped
        [Parameter(Mandatory=$true)][string]$ResourceGroupName
    )
    # The name of the Automation Credential asset this runbook uses to authenticate to Azure
    $CredentialAssetName = 'CredentialName'
    $Cred = Get-AutomationPSCredential -Name $CredentialAssetName
    $Account = Add-AzureAccount -Credential $Cred
    $AccountRM = Add-AzureRmAccount -Credential $Cred
    Select-AzureSubscription -SubscriptionName "MySubscription"

    # Get all the VMs in the target resource group
    $VMs = Get-AzureRmVM -ResourceGroupName "$ResourceGroupName"

    if (!$VMs) {
        Write-Output "No VMs were found in resource group $ResourceGroupName."
    } else {
        # Stop VMs in parallel, skipping any tagged to run 24x7
        Foreach -parallel ($VM in $VMs) {
            if ($VM.Tags['server24x7'] -ne 'Yes') {
                Stop-AzureRmVM -ResourceGroupName "$ResourceGroupName" -Name $VM.Name -Force -ErrorAction SilentlyContinue
            }
        }
    }
}
    

The ability to hide secret variables within the Automation blade was useful too, as we needed somewhere safe to store the credentials used to connect to the subscription.

Once authored and pushed to Azure, the runbooks had schedules set up as follows:

• For development and test, the servers would turn off at 10 pm and on again at 7 am on weekdays, staying off all weekend.
• For UAT, the servers would turn off at midnight at the weekend and come back on at midnight on Monday.
• For pre-production, the servers would turn off at 10 pm every day (and stay off at weekends), but they were configured not to start back up automatically. We made turning them back on an on-demand action via TeamCity (their CI system). Turning them off at 10 pm meant they were never online when not needed.
• The CI agents followed the same pattern as the development and test servers, as we didn’t want the CI server trying to deploy changes to servers that were switched off.
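The schedule rules above boil down to a simple predicate per environment. As a sketch (in Python rather than the PowerShell the runbooks use, purely for illustration), the development/test rule might look like this:

```python
from datetime import datetime

def devtest_should_run(now):
    """Dev/test servers run 07:00-22:00 GMT on weekdays and stay off all weekend."""
    if now.weekday() >= 5:  # Saturday (5) or Sunday (6)
        return False
    return 7 <= now.hour < 22
```

The runbook schedules encode the same rule operationally: the "on" schedule fires at 7 am and the "off" schedule at 10 pm, weekdays only.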

    Alerts were set up so that if these scripts failed, someone was alerted and could act on it. If they failed for several days, the cost saving wouldn’t have been very good!

Using the above proved very effective. As an example (using the Azure pricing calculator), a standard Windows D12 VM without additional disks left on all month (744 hours) would cost around £322.17. Using a schedule like the above to turn it on and off brings the cost down to £140.73 – over a 50% saving. Multiply that across hundreds of servers and it adds up to a hefty saving.
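The arithmetic behind that example can be checked directly. This Python sketch derives the hourly rate from the quoted full-month price and applies the weekday-only schedule; the 21.7 weekdays-per-month figure is an assumption chosen to approximate the quoted scheduled cost:

```python
FULL_MONTH_HOURS = 744      # 31 days x 24 hours
FULL_MONTH_COST = 322.17    # quoted price in GBP for a standard Windows D12 VM

hourly_rate = FULL_MONTH_COST / FULL_MONTH_HOURS

# Weekday schedule: on 07:00-22:00 (15 hours/day), ~21.7 weekdays per month
scheduled_hours = 15 * 21.7
scheduled_cost = hourly_rate * scheduled_hours
saving_fraction = 1 - scheduled_cost / FULL_MONTH_COST
```

This lands within pennies of the quoted £140.73 and confirms the saving exceeds 50%.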

    They were very happy about this as they could see the cost coming down straight away when they went to view billing details via the portal.

    Using the Azure Advisor to guide us on servers that might be underutilised

    A useful tool that has been recently released in Azure is the Advisor blade.

    advisor

This is designed to look over the resources in a subscription and make recommendations in areas such as under-utilisation and cost, availability, performance, and security. We decided it would be useful to run it against the subscription, so we logged into the portal and switched the blade on. This registers the Advisor with the subscription and kicks off the process straight away.

    It took about 5 minutes in total for it to work its magic and then the results were ready.

    One of the best features of the Advisor tool is that it continually works in the background like a virtual consultant looking at your resources. Recommendations are updated every hour so you can go in and search through the content to see what it says. If you choose not to act on a recommendation, you can snooze it so it doesn’t appear under the active list. The blade is extremely easy to use and navigate to.

    The first time it ran, it had identified several VMs that were either underutilised or not used at all, and the recommendation was to either resize, turn off or remove completely. Part of what is presented back is a potential cost saving by doing something with the VMs – this allowed us to very quickly generate a rough total of what could be saved.
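    That tallying step can be sketched as follows. The recommendation records here are hypothetical stand-ins – the real data comes from the Advisor blade (or its REST API), not from a hand-written list like this:

```python
# Hypothetical Advisor-style cost recommendations:
# (resource name, suggested action, potential monthly saving in GBP).
recommendations = [
    ("vm-app-01",  "Shut down (underutilised)", 140.50),
    ("vm-app-02",  "Resize to a smaller SKU",    75.20),
    ("vm-build-3", "Delete (never used)",       322.17),
]

# Roll the per-resource savings up into one headline figure.
total_saving = sum(saving for _, _, saving in recommendations)
print(f"Potential monthly saving: £{total_saving:.2f}")
```

    Having a single headline number like this is exactly what made the conversation with the IT director so quick.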

    After a quick check with the company’s IT director to discuss the highlighted servers, we found they were in fact not being used but had been provisioned for a project that wasn’t started. Having the cost on the screen showed them instantly what could be saved and the decision was made to turn the machines off in the meantime.

    The Advisor is a great tool: switch it on and it looks over the resources in a subscription and generates feedback, without a human having to click through numerous screens in the portal or write any code. It’s certainly a must if lots of departments can create resources in your subscription and aren’t particularly good at cleaning up afterwards!

    Conclusions

    There are many other ways to go about cost savings in Azure; however, if you are limited in what you can do, then automation and the Advisor can help.

    The Advisor is a tool we’d especially recommend for gaining insight into how your resources are being used – and not just for cost. We trialled a few other third-party tools that either only worked against an Enterprise subscription (which the client didn’t have), or simply didn’t work. The Azure Advisor was certainly the best of the bunch and the least painful to use.

    Chris Taylor is a DevOps Engineer at DevOpsGuys. The DevOpsGuys are a DevOps consultancy who are leading the charge for WinOps. CEO James Smith and CTO Steve Thair co-founded WinOps London with Alex Dover and Sam Gorvin from Prism Digital.

    For more on Azure and DevOps methodologies, check out the WinOps meetup. Every month, like-minded people join together to listen to talks and discuss the use of DevOps methodologies on the Windows stack. If you or your company would be interested in hosting a meetup for between 100 and 200 people, get in touch with the organisers.

    For more information, including sponsorship and speaking opportunities, contact sponsorship@winops.org. Follow the meetup on Twitter @WinOPsLDN and meetup.com/winops.

    Azure Partner Community: Microsoft Azure Stack – Blurring the lines between private, public, and hybrid clouds


    Willie Maul, Technology Solutions Professional, Microsoft Azure

    Microsoft differentiates itself from competitors by offering private, public, and hybrid clouds. By combining the expertise gained from running IaaS, PaaS, and SaaS services with its history of providing on-premises solutions, Microsoft can develop solutions with hybrid cloud in mind like no other company. Examples are SQL Server Stretch Database and Microsoft Azure StorSimple, on-premises solutions with built-in capabilities to scale automatically by utilizing Microsoft Azure.

    Microsoft is taking things a step further with Microsoft Azure Stack, which blurs the line between public, private, and hybrid clouds by extending Azure into customers’ datacenters. Azure Stack is not a different version of Azure, but provides the same portal, APIs, application model, and tooling as Azure. It’s this month’s topic for the Azure Partner Community call and blog series.

    Sign up for the April 13 Azure Infrastructure Partner call

    Microsoft Azure Stack overview

    The concept of the hybrid cloud has been around for years. There are several reasons, such as regulatory and latency concerns, why a company may not be able to move its entire infrastructure to the cloud. Hybrid cloud does come with challenges, though: applications must be developed and architected to work seamlessly both on-premises and in the cloud, IT staff need knowledge of both cloud technology and on-premises infrastructure, and hybrid cloud doesn’t always provide the speed, flexibility, and scale of public cloud.

    Learn more about Microsoft Azure Stack

    Microsoft Azure Stack Overview

    How Azure Stack solves common hybrid cloud challenges

    One application model

    • Write to the same Azure APIs
    • Write your application with Azure services, and deploy either on Azure or Azure Stack depending on business needs, regulations, and policies

    Developer and IT consistency

    • Developers can build applications using a consistent set of Azure services and DevOps processes and tools
    • Developers can speed up new cloud application development by using prebuilt solutions from the Azure Marketplace, including open source tools and technologies
    • Use Azure Resource Manager to build reusable application templates for traditional and cloud-native apps
    • Use role-based access control in Azure Resource Manager and Azure Active Directory to enable fine-grained access to application resources
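    One practical consequence of that consistency is that a single Azure Resource Manager template can be deployed to either Azure or an Azure Stack region. The minimal template below is only an illustration (the resource and parameter names are examples, and Azure Stack supports a subset of Azure apiVersions, so availability should be checked before deploying):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[parameters('storageAccountName')]",
      "apiVersion": "2016-01-01",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "Storage",
      "properties": {}
    }
  ]
}
```

    Because the template targets the same Resource Manager API surface, the deployment tooling and process stay the same regardless of which cloud it lands in.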

    Gain the efficiencies of public cloud

    • Customers can quickly provision services through the portal just as they would in Azure
    • Automatically scale out on-premises or to Azure as needed
    • Increase productivity by utilizing PaaS services and Azure marketplace solutions on-premises
    • Realize cost benefits with a pay-as-you-use pricing model

    The partner opportunity

    Microsoft Azure Stack presents new opportunities to assist your customers with their digital transformation plans. Customers that choose hybrid cloud environments should have the same flexibility and innovation capability that cloud customers have. Azure technologies are now accessible on-premises with Azure Stack, and organizations can now modernize their applications across hybrid cloud environments.

    Resources

    Get hands-on with Azure Stack and learn more on the April 13 partner call

    Azure Stack Technical Preview 3 is available for download now. Read the March 1 announcement for a look at what’s new in Azure Stack TP3, descriptions of hybrid use cases for Azure and Azure Stack, and a roadmap update. Then, sign up for the April 13 Azure Infrastructure Partner call for an in-depth discussion about Azure Stack.

    Sign up for the April 13 Azure Infrastructure Community Call

    Azure Infrastructure and Management Partner Community

    • Join the Azure Partners call on April 13
    • Download the Cloud Infrastructure Playbook
    • Microsoft Azure Stack overview

    How to simulate and detect attacks with the Advanced Threat Analytics Playbook


    This article is a translation of the Microsoft Advanced Threat Analytics team blog post “How to simulate and detect attacks with the Advanced Threat Analytics Playbook” (originally published February 23, 2017, US time).


    A large share of the feedback the Advanced Threat Analytics (ATA) team receives is a request for clear, simple steps to simulate attacks and see how ATA detects them.

    To answer that call, we created a playbook that includes:

    1. Step-by-step instructions for simulating various techniques used in real-world advanced attack scenarios.
    2. A walkthrough of a complete attack campaign, from initial reconnaissance to domain dominance.
    3. A walkthrough of how ATA detects the suspicious activities involved.

    Download the ATA attack simulation playbook

    The playbook does not cover every attack ATA can detect. Some ATA detections require a learning period, and to keep things simple the playbook does not include instructions for simulating techniques that need one.

    Head over to the Tech Community to share feedback and suggest techniques for the next version of the playbook.

    Skype for Business drives digital transformation


    (This article is a translation of Skype for Business drives digital transformation, posted to Office Blogs on March 27, 2017. For the latest information, please refer to the original article.)

    Today’s post was written by Ron Markezich, corporate vice president for Office 365 Marketing.

    With more than 85 million monthly active users, Office 365 is a universal toolkit for collaboration, designed to address the different workstyles of every group. It brings together Outlook for email, SharePoint for intelligent content management, Yammer for networking across the organization, and Microsoft Teams for fast, chat-based teamwork, with Skype for Business serving as the backbone for enterprise voice and video meetings.

    Communication and collaboration are essential to getting work done, and customers are turning to Skype for Business in their Office 365 plans to meet their diverse meeting and calling needs. More than one billion meetings take place on the Skype network worldwide each year, and usage of Skype for Business Online has doubled over the past year.

    This week, as Enterprise Connect, the unified communications industry’s annual conference, takes place in Orlando, Microsoft is announcing new Skype for Business enhancements and partner solutions in Office 365. These announcements mark a significant step toward our goal of making Skype the platform for communications-centered productivity.

    • Two new calling features, Auto Attendant and Call Queues, are now available in Skype for Business Cloud PBX.
    • A preview of the new Skype for Business Call Analytics dashboard is now available, giving IT admins greater visibility to identify and address call issues.
    • New meeting room solutions are coming from partners. Polycom RealConnect for Office 365 lets customers connect their existing video conferencing devices to Skype for Business Online meetings, and the new Crestron SR for Skype Room Systems integrates seamlessly with Crestron control and AV systems.
    • Enghouse Interactive TouchPoint Attendant is now available as the first attendant console for Skype for Business Online.

    “Skype for Business Online is becoming part of our DNA.”
    —Menakshi Sehwani, regional IT business partner, J. Walter Thompson Europe

    A complete, enterprise-grade communications solution

    Two new advanced calling features are now available in Skype for Business Cloud PBX: Auto Attendant and Call Queues. Auto Attendant provides automated answering and routing of incoming calls using dial pad input and speech recognition. Call Queues route incoming calls, in the order received, to the next available agent.

    These enhancements build on the rapid pace of innovation in our calling services over the past six months, including:

    • iOS CallKit integration (in English)
    • Skype for Business client for Mac
    • Expansion of PSTN Conferencing to more than 90 countries, with dial-out to 180 countries
    • Expansion of PSTN Calling to France, Spain, and the U.K., with a preview in the Netherlands
    • Support for hybrid deployments with thousands of users
    • Skype for Business Server Cloud Connector Edition, which connects on-premises telephony assets to cloud voice solutions

    By replacing legacy meeting and phone systems with Skype for Business, companies enable employees to join meetings and to make, receive, and manage calls right within Office 365, on any device. With Skype for Business Cloud PBX, IT admins also gain central management in the Office 365 admin console, seamlessly administering communications alongside email, content, and collaboration.

    Making IT management and control easier

    We also announced a preview of Skype for Business Online Call Analytics, a new dashboard in the Office 365 admin console that gives IT admins greater visibility to identify and address calling issues their users encounter, such as network problems or headset failures. Customers tell us one of the biggest benefits of moving to cloud-based communications is consolidating all meeting and calling systems into a single solution, streamlining provisioning and administration. They have also asked for richer visibility into call data to help them handle user support inquiries. Call Analytics provides rich telemetry data in real time, helping IT admins troubleshoot issues and improve the user experience.

    Beyond the Call Analytics dashboard and other IT management capabilities, new authentication capabilities are now available to strengthen Skype for Business Online security, including multi-factor authentication for PowerShell, certificate-based authentication, and custom policies for clients, conferencing, and mobility (in English).

    “Our IT department wants to enable the digital world of the future at Henkel. Capabilities like Skype for Business Cloud PBX make that possible.”
    —Markus Petrak, corporate director of integrated business solutions, Henkel

    Making meeting rooms more effective

    Web and video conferencing with capabilities like screen sharing, IM, and whiteboarding are essential for making meetings as effective and engaging as possible for all participants, wherever they are. At the same time, companies want to take advantage of their existing meeting room assets alongside the full capabilities of Skype for Business. Polycom has announced that RealConnect for Office 365 (in English), its cloud-based video interoperability service, will be generally available in North America starting in April. The RealConnect service connects existing video teleconferencing (VTC) devices to Skype for Business Online at a low cost of ownership, and is easy for IT to provision and simple for users to adopt.

    “Polycom RealConnect for Office 365 simplifies the video world by connecting Skype for Business Online users with users of other video systems,” says Mary McDowell, CEO of Polycom. “This cloud service protects investments in existing video systems with one-click join for Skype for Business meetings.”

    Crestron has also announced its SR for Skype Room Systems (in English) solution. Crestron SR, a next-generation Skype Room System, is newly designed to deliver a fully native Skype for Business experience and to integrate seamlessly with Crestron control and AV systems. These Skype Room Systems solutions transform meeting rooms of all sizes with great audio, high-quality video, and in-room content sharing, letting remote participants join meetings and place calls quickly and easily. Customers using the Logitech SmartDock, released in October 2016, are already seeing the benefits.

    “User adoption is essential to the success of our IT initiatives. Logitech SmartDock with Skype Room Systems makes video collaboration easy, and it is affordable, so we can outfit multiple meeting rooms for the price of one traditional video conferencing room.”
    —Franzuha Byrd, IT director, Morgan Franklin Consulting

    Business solutions with Skype for Business

    Just as Skype for Business powers communications across Office 365, partners and customers build custom apps with the Skype for Business APIs and SDKs, embedding real-time communications into line-of-business applications and enterprise solutions.

    At HIMSS, Microsoft announced the Skype for Business App SDK (in English) and Office 365 Virtual Health Templates (in English). This week, Enghouse announced TouchPoint Attendant, one of the first attendant consoles for Skype for Business Online.

    Enghouse uses Skype for Business to efficiently route incoming customer calls through its new attendant console, and Smartsheet (in English) embeds Skype for Business into its collaborative work management platform. In these ways, Skype for Business serves as the backbone for custom communications.

    Join us at Enterprise Connect

    Office 365 is the broadest and deepest toolkit for communication and collaboration on the market, addressing the diverse needs of teams and individuals around the world. Skype for Business, a single platform for meetings, video, and voice, is core to Office 365, helping teams and individuals build, create, and produce documents and ideas faster. We are excited to share new innovations that boost productivity and simplify management as part of our comprehensive platform spanning on-premises and cloud.

    Please join us at Enterprise Connect on Wednesday, March 29, 2017, at 10 a.m. Eastern Daylight Time (11 p.m. the same day, Japan time). In the Microsoft keynote, we will show how Microsoft is driving customers’ digital transformation by empowering individuals, IT, and companies with the latest communication and collaboration capabilities.

    —Ron Markezich

    Note: The information in this article (including attachments and links) is current as of the date of writing and is subject to change without notice.


