Channel: TechNet Blogs

Bing Maps – Distance Matrix API


By Jamie Maguire, Development Consultant at Grey Matter

Jamie Maguire is a Software Architect with 15+ years’ experience architecting and building solutions using the .NET stack. He is a Bing Maps and AI Development Consultant for Grey Matter, the EMEA & APAC distributor for Bing Maps and Microsoft Gold Partner.

Introduction

Microsoft has been busy working hard on the Bing Maps v8 Web Control and adding rich new functionality. One of the more recent additions to the Bing Maps ecosystem is the Distance Matrix API.

The API was released at Microsoft Ignite in October and allows you to generate travel times and distances (with the help of the Bing Maps Route API) for a given set of origins and destinations.

It can also factor in predictive traffic information when generating times thereby allowing you to avoid any potential delays.  Some of the other features include, but are not limited to:

  • Calculating one day's worth of travel in 15-minute intervals
  • Support for multiple transport modes
  • Support for GET and POST requests
  • Asynchronous support (for larger requests)
  • JSON and XML support
  • Option to cache results for up to 72 hours

In this blog post, we:

  • introduce the Bing Distance Matrix API
  • identify some of its applications
  • discuss how the Bing Distance Matrix API helps solve real-world problems
  • walk through some sample code

First, a few concepts:

Distance Matrix

A distance matrix is a two-dimensional array that holds the travel distances (or times) between a set of origins and a set of destinations.

Waypoint

A geographical location, defined by latitude and longitude, used for navigational purposes.

Travel Mode

This is the mode of travel for a given route. This can be Driving, Public Transit or Walking.

 

Applications

A distance matrix has numerous applications in routing and fleet management and can be applied to many sectors such as Retail, Logistics, Manufacturing and Property / Real Estate.   

Implementing a distance matrix allows you to:

  • determine arrival times based on start times
  • sort search results by their travel distance or time
  • calculate the difference in commute times between locations
  • cluster data based on travel time/distance, e.g. find all restaurants within a 3-mile radius of the property you’re about to purchase
  • …and much more!

These features can be important if you need to consider time windows, have multiple pickup and delivery locations or split deliveries.

One of the most common applications of the distance matrix is to help power algorithms for logistics problems, specifically the Vehicle Routing Problem (VRP) and the Travelling Salesman Problem (TSP), i.e. route optimisation.

 

The Traveling Salesman Problem (TSP)

The travelling salesman problem, which was first formulated in 1930, asks the following question:

 

"Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?"

 

A problem like this has many variables, and it’s outside the scope of this blog post to go into the mathematics, linear programming and computation required to arrive at such answers. An in-depth explanation of the TSP can be found here if you’re interested.

 

How does the Distance Matrix API help solve this problem?

The Bing Maps Distance Matrix API shields you from the complexity of this problem and helps solve it for you with ease.   

By encapsulating complex algorithms in easy to use endpoints, you can quickly develop solutions in a language of your choice and integrate TSP solutions with your existing tech stack or business application.

You supply a set of Origins, Destinations and Travel Mode, invoke the Distance Matrix API, and a distance matrix will be returned that contains Travel Distances and Travel Durations.   

Armed with this data you can then identify an optimum route for your fleet or sales team.
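
To make this concrete, here is a minimal sketch of the GET form of such a request in PowerShell; it assumes you hold your own Bing Maps key in $bingMapsKey and reuses the coordinates from the Postman example later in this post.

# A minimal sketch of a Distance Matrix request (the key is a placeholder; the
# coordinates are the ones used in the Postman example in this post).
$bingMapsKey  = "YOUR_BING_MAPS_KEY"
$origins      = "51.506420135498,-0.127210006117821"   # London (origin)
$destinations = "53.7947998046875,-1.54653000831604;51.506420135498,-0.127210006117821;53.4100914001465,-2.9784300327301"

$uri = "https://dev.virtualearth.net/REST/v1/Routes/DistanceMatrix" +
       "?origins=$origins&destinations=$destinations" +
       "&travelMode=driving&key=$bingMapsKey"

# Invoke-RestMethod deserialises the JSON response into PowerShell objects.
$response = Invoke-RestMethod -Uri $uri -Method Get
$matrix   = $response.resourceSets[0].resources[0]
$matrix.results | Format-Table originIndex, destinationIndex, travelDistance, travelDuration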

 

An Example

In this example, we have a salesperson with several client meetings across the UK. They're based in London and need to visit the following places in one day:

  • Leeds
  • York
  • Liverpool

A solution that leverages the Bing Distance Matrix API can help optimise journey times by prompting the salesperson as to which clients are closest.

In the following screenshot, a request has been constructed in Postman with the following parameters (the entire request URL is included in the reference section at the end of this post):

 

This request is then sent to the Distance Matrix API endpoint, which returns the response shown in the code extract below:

{
    "authenticationResultCode": "ValidCredentials",
    "resourceSets": [
        {
            "estimatedTotal": 1,
            "resources": [
                {
                    "__type": "DistanceMatrix:http://schemas.microsoft.com/search/local/ws/rest/v1",
                    "destinations": [
                        {
                            "latitude": 53.7947998046875,
                            "longitude": -1.54653000831604
                        },
                        {
                            "latitude": 51.506420135498,
                            "longitude": -0.127210006117821
                        },
                        {
                            "latitude": 53.4100914001465,
                            "longitude": -2.9784300327301
                        }
                    ],
                    "errorMessage": "Request accepted.",
                    "origins": [
                        {
                            "latitude": 51.506420135498,
                            "longitude": -0.127210006117821
                        }
                    ],
                    "results": [
                        {
                            "destinationIndex": 0,
                            "originIndex": 0,
                            "travelDistance": 194.086344075461,
                            "travelDuration": 12104.9
                        },
                        {
                            "destinationIndex": 1,
                            "originIndex": 0,
                            "travelDistance": 0.0317589719333333,
                            "travelDuration": 36.8
                        },
                        {
                            "destinationIndex": 2,
                            "originIndex": 0,
                            "travelDistance": 210.211478785907,
                            "travelDuration": 13215.8
                        }
                    ]
                }
            ]
        }
    ],
    "statusCode": 200,
    "statusDescription": "OK",

The JSON is straightforward enough to read. The key nodes are:

  • Origins
  • Destinations
  • Results

The Origin and Destination nodes are self-explanatory – these contain the starting point and destinations that are relevant to our salesperson.

Results Node

In the Results node, however, you can see that the travelDistance and travelDuration values have been calculated for each destination by the Distance Matrix API.

As our fictitious salesperson completes each meeting, their mobile CRM tool, powered by the Distance Matrix API, can auto-suggest the next closest meeting, thereby allowing them to focus on value-add tasks as opposed to journey planning.

“Under the hood”, you can query this JSON in your application and sort and display the results in whichever way you see fit. This is just one way that the Bing Distance Matrix API can help businesses run more efficiently.
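
As a rough illustration (reusing the $matrix object from the earlier PowerShell sketch), sorting the results by travelDuration is enough to surface the closest destination first:

# Rank the destinations by travel duration, closest first (a sketch that
# reuses $matrix from the request example earlier in this post).
$ranked = $matrix.results | Sort-Object travelDuration

foreach ($result in $ranked) {
    $dest = $matrix.destinations[$result.destinationIndex]
    "{0}, {1} -> distance {2:N1}, duration {3:N0}" -f $dest.latitude, $dest.longitude, $result.travelDistance, $result.travelDuration
}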

 

Another example – Property and Real Estate

Imagine for a minute that you’re working on a mobile application for a property / real estate business.  When searching for properties in the mobile application, users must be able to plan a potential commute from the property they’re currently viewing.  

With the Distance Matrix API, you can supply the latitude/longitude of the property and the respective destinations (office or train station for example) – along with the mode of travel (driving, public transit or walking) and the API will return data that allows you to determine the quickest routes.
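
Only the travelMode parameter changes between those comparisons. A hedged sketch, with placeholder coordinates and the same $bingMapsKey as before:

# Compare commute options from a property to the office by varying travelMode
# (the coordinates are illustrative placeholders).
$property = "51.50,-0.12"   # latitude,longitude of the property being viewed
$office   = "51.51,-0.08"   # latitude,longitude of the destination

foreach ($mode in "driving", "transit", "walking") {
    $uri = "https://dev.virtualearth.net/REST/v1/Routes/DistanceMatrix" +
           "?origins=$property&destinations=$office&travelMode=$mode&key=$bingMapsKey"
    $result = (Invoke-RestMethod -Uri $uri).resourceSets[0].resources[0].results[0]
    "{0,-8} duration: {1}" -f $mode, $result.travelDuration
}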

You might also want to further enrich your datasets by introducing the Spatial Data Services API to identify Points of Interest Data (POI).

 

Predictive Intelligence

Another layer of intelligence can be applied by leveraging predictive traffic data, which can help you provide even more accurate timing estimates.

If this is something that you’re interested in, the Distance Matrix Histogram endpoint is something you should explore.

 

Summary  

In this blog post, we’ve looked at the Bing Distance Matrix API and explored some of the features and what’s possible with it. We’ve seen how easy it is to consume the Distance Matrix API using a tool like Postman.

Alternatively, if you’re interested in seeing how the API can be consumed using .NET, you can check out the Bing Maps REST Toolkit for .NET project on GitHub here.

Are you using Bing Maps in any of your solutions?

~~~

For reference:

You’ll need a Bing Maps Account and Key prior to making any requests, you can get one for free here.

The entire request that was sent to the Distance Matrix API endpoint via Postman:

https://dev.virtualearth.net/REST/v1/Routes/DistanceMatrix?origins=51.506420135498,-0.127210006117821&destinations=53.7947998046875,-1.54653000831604;51.506420135498,-0.127210006117821;53.4100914001465,-2.9784300327301&travelMode=driving&key=****YOUR BING KEY****&distanceUnit=mile&timeUnit=minutes

You might also be wondering how to identify the latitude and longitude of each location; you can get this information by making a request to the following endpoint:

http://dev.virtualearth.net/REST/v1/Locations?q=YOUR_LOCATION
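
A hedged PowerShell example of that lookup (it assumes the same $bingMapsKey as earlier; the place name is purely illustrative):

# Geocode a place name to latitude/longitude with the Locations endpoint.
$geo = Invoke-RestMethod -Uri "https://dev.virtualearth.net/REST/v1/Locations?q=Leeds&key=$bingMapsKey"
$geo.resourceSets[0].resources[0].point.coordinates   # first element is latitude, second is longitude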

Further Reading


Here’s what you missed – Five big announcements for Storage Spaces Direct from the Windows Server Summit


This post was authored by Cosmos Darwin (@cosmosdarwin), PM on the Windows Server team at Microsoft.


Yesterday we held the first Windows Server Summit, an online event all about modernizing your infrastructure and applications with Windows Server. If you missed the live event, the recordings are available for on-demand viewing. Here are the five biggest announcements for Storage Spaces Direct and Hyper-Converged Infrastructure (HCI) from yesterday’s event:

#1. Go bigger, up to 4 PB

With Windows Server 2016, you can pool up to 1 PB of drives into a single Storage Spaces Direct cluster. This is an immense amount of storage! But year after year, manufacturers find ways to make ever-larger* drives, and some of you – especially for media, archival, and backup use cases – asked for more. We heard you, and that’s why Storage Spaces Direct in Windows Server 2019 can scale 4x larger!


The new maximum size per storage pool is 4 petabytes (PB), or 4,000 terabytes. All related capacity guidelines and/or limits are increasing as well: for example, Storage Spaces Direct in Windows Server 2019 supports twice as many volumes (64 instead of 32), each twice as large as before (64 TB instead of 32 TB). These are summarized in the table below.


* See these new 14 TB drives – whoa! – from our friends at Toshiba, Seagate, and Western Digital.

Our hardware partners are developing and validating SKUs to support this increased scale.

We expect to have more to share at Ignite 2018 in September.

#2. True two-node at the edge

Storage Spaces Direct has proven extremely popular at the edge, in places like branch offices and retail stores. For these deployments, especially when the same gear will be deployed to tens or hundreds of locations, cost is paramount. The simplicity and savings of hyper-converged infrastructure – using the same servers to provide compute and storage – presents an attractive solution.

Since release, Storage Spaces Direct has supported scaling down to just two nodes. But any two-node cluster, whether it runs Windows or VMware or Nutanix, needs some tie-breaker mechanism to achieve quorum and guarantee high availability. In Windows Server 2016, you could use a file share (“File Share Witness”) or an Azure blob (“Cloud Witness”) for quorum.

What about remote sites, field installations, or ships and submarines that have no Internet to access the cloud, and no other Windows infrastructure to provide a file share? For these customers, Windows Server 2019 introduces a surprising breakthrough: use a simple USB thumb drive as the witness! This makes Windows Server the first major hyper-converged platform to deliver true two-node clustering, without another server or VM, without Internet, and even without Active Directory.


Simply insert the USB thumb drive into the USB port on your router, use the router’s UI to configure the share name, username, and password for access, and then use the new -Credential flag of the Set-ClusterQuorum cmdlet to provide the username and password to Windows for safekeeping.
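
For example, something along these lines (the share path is illustrative; -Credential is the new Windows Server 2019 parameter mentioned above):

# Point the cluster's file share witness at the USB share on the router.
$cred = Get-Credential   # the username/password you configured in the router's UI
Set-ClusterQuorum -FileShareWitness "\\192.168.1.1\witness" -Credential $cred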

An extremely low-cost quorum solution that works anywhere.

Stay tuned for documentation and reference hardware (routers that Microsoft has verified support this feature, which requires an up-to-date, secure version of SMB file sharing) in the coming months.

#3. Drive latency outlier detection

In response to your feedback, Windows Server 2019 makes it easier to identify and investigate drives with abnormal latency.

Windows now records the outcome (success or failure) and latency (elapsed time) of every read and write to every drive, by default. In an upcoming Insider Preview build, you’ll be able to view and compare these deep IO statistics in Windows Admin Center and with a new PowerShell cmdlet.


Moreover, Windows Server 2019 introduces built-in outlier detection for Storage Spaces Direct, inspired by Microsoft Azure’s long-standing and very successful approach. Drives with abnormal behavior, whether it’s their average or 99th percentile latency that stands out, are automatically detected and marked in PowerShell and Windows Admin Center as “Abnormal Latency” status. This gives Storage Spaces Direct administrators the most robust set of defenses against drive latency available on any major hyper-converged infrastructure platform.


Watch the Insider Preview release notes to know when this feature becomes available.

#4. Faster mirror-accelerated parity

Mirror-accelerated parity lets you create volumes that are part mirror and part parity. This is like mixing RAID-1 and RAID-6 to get the best of both: fast write performance by deferring the compute-intensive parity calculation, and with better capacity efficiency than mirror alone. (And, it’s easier than you think in Windows Admin Center.)
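
In PowerShell, a mirror-accelerated parity volume is created by asking New-Volume for two storage tiers; the sketch below uses illustrative tier names and sizes, which vary by deployment.

# Create a volume that is part mirror ("Performance" tier) and part parity
# ("Capacity" tier); adjust the tier names and sizes to your cluster.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" `
    -FileSystem CSVFS_ReFS `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 1TB, 9TB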


In Windows Server 2019, the performance of mirror-accelerated parity has more than doubled relative to Windows Server 2016! Mirror continues to offer the best absolute performance, but these improvements bring mirror-accelerated parity surprisingly close, unlocking the capacity savings of parity for more use cases.


These improvements are available in Insider Preview today.

#5. Greater hardware choice

To deploy Storage Spaces Direct in production, Microsoft recommends Windows Server Software-Defined hardware/software offers from our partners, which include deployment tools and procedures. They are designed, assembled, and validated against our reference architecture to ensure compatibility and reliability, so you get up and running quickly.


Since Ignite 2017, the number of available hardware SKUs has nearly doubled, to 33. We are happy to welcome Inspur and NEC as our newest Windows Server Software-Defined partners, and to share that many existing partners have extended their validation to more SKUs – for example, Dell-EMC now offers 8 different pre-validated Storage Spaces Direct Ready Node configurations!

Since Ignite 2017, the number of Windows Server Software-Defined (WSSD) certified hardware SKUs and the number of components with the Software Defined Data Center (SDDC) Additional Qualifications in the Windows Server catalog has nearly doubled.

This momentum is great news for Storage Spaces Direct customers. It means more vendor and hardware choices and greater flexibility without the hassle of integrating one-off customizations. Looking to procure hardware? Get started today at Microsoft.com/WSSD.

Looking forward to Ignite 2018

Today’s news builds on announcements we made previously, like deduplication and compression for ReFS, support for persistent memory in Storage Spaces Direct, and our monthly updates to Windows Admin Center for Hyper-Converged Infrastructure. Windows Server 2019 is shaping up to be an incredibly exciting release for Storage Spaces Direct.

Join the Windows Insider program to get started evaluating Windows Server 2019 today.

We look forward to sharing more news, including a few surprises, later this year. Thanks for reading!

- Cosmos and the Storage Spaces Direct engineering team

Finally Remove NTLMv1 with Project VAST


Are you old enough to remember parachute pants, VCRs, and boom boxes? How about the Mosaic browser, Banyan VINES, and Token Ring networking? Do you still use any of these things? Probably not. But chances are your organization uses a protocol that is equally old.

You wouldn’t wear leather armor on a modern battlefield. And you shouldn’t expect 25-year-old technology to stand up to a six-month-old attack technique.

Hey, it’s Jon once again, with this month’s installment about Project VAST (the Visual Auditing Security Tool). In this edition, we need to talk about the elephant in the room; we need to talk about NTLM and what you can do about this ancient and deprecated protocol. Yes, this protocol is probably in your environment and yes it is a problem. But with some diligent work and some help from Project VAST, you can deal with it effectively.

Quick Review: What is NTLM

Once upon a time, before Active Directory (AD), before Windows 2000, before Microsoft’s implementation of Kerberos, there was NTLM (okay, I’ll stop with the reminiscing 😊 ). NTLM stands for NT Lan Manager, a suite of Microsoft protocols used for authentication and integrity. You may know NTLM as a challenge-response protocol. By today’s standards, an NTLM challenge-response is really pretty simple:

  • The client sends the user name to the resource server in plain text
  • The resource server generates a nonce, or random number, and sends it to the client
  • The client encrypts the nonce with the hash of the user's password (which it has cached in memory at logon) and returns it to the server
  • The server proxies the username, the challenge it sent to the client, and the client's response to the challenge, to the DC (or other authoritative server) for confirmation
  • The DC/authority retrieves the hash from its local Security Accounts Manager (SAM) database and uses it to encrypt the challenge it received from the server
  • The DC/authority compares the value it just computed against the one from the client; identical values result in authorization to the resource server

(Ref: MSDN at https://msdn.microsoft.com/en-us/library/windows/desktop/aa378749(v=vs.85).aspx)

Notice anything here? NTLM was particularly innovative and effective, in its heyday, because it never transmitted the user’s password or its hash across the network, where it could be easily stolen.

Why is NTLM still in use if we have Kerberos?

Good question. Though NTLM has been largely replaced by Kerberos in AD (and several other protocols for Internet-based authentication), NTLM is still in widespread use because it fills in some gaps where Kerberos is not possible. Kerberos relies upon a trusted-third-party scenario; in AD, this third-party is the Key Distribution Center (KDC) portion of a DC. But suppose you have an old application that can’t support Kerberos, or that you have a member server that lives outside of your AD forest, or that you need to call a resource in a manner unsupported by a Service Principal Name (SPN), such as an IP address. In these scenarios, you need something else and that something else is generally NTLM.

OK, so what’s the problem?

Well done – you’re asking all the right questions.  😊  The problem is multi-faceted. First, the NTLMv1 hash (the mathematical value used to represent the password) is based upon the Data Encryption Standard (DES) symmetric-key encryption algorithm. DES was considered secure when it was invented and when NTLMv1 used it. But given today’s graphical processing units and freely-available password cracking tools, DES is easily broken.

As easy as NTLM is to crack, that’s not its biggest weakness. Its biggest weakness is its vulnerability to credential theft attacks such as Pass the Hash (PtH). While there are many similar attacks, the underlying strategy is generally the same: steal a “secret” (such as an NTLM hash) from an end-point where it has been cached in memory (recall the third step of challenge-response), and use it to access resources to which one would otherwise not be granted access. (To be clear, PtH is not a Windows-specific attack.)

A thorough discussion of PtH and credential theft is not really my focus here. (For a thorough discussion, check out my friend Mark Simos’s excellent video and work at https://aka.ms/pth.) For now, think of PtH as akin to using a fake ID. The account whose context is being used is still the account of record; but in its context, an attacker has stolen another “ID” (or hash) and used it to grant itself access to a resource that is prohibited (or to which the original account has not been authorized). In the physical world, maybe this “resource” is a bar or six-pack of beer; in AD terms, it’s generally a computing resource or data set.

By using free tools (the same tools used by attackers), one can easily see the problem. Take a look at these commands executed with Windows Credential Editor (WCE).

In a nutshell, we have just attacked NTLM by stealing very powerful credentials from memory and re-using them. This attack required no knowledge of NTLM, Kerberos or AD. And, in the example above, the attacker was able to achieve domain dominance.

What to do about NTLM?

We haven’t talked about NTLM versions yet. Prior to the ascendancy of AD and Kerberos, Microsoft released two (or more precisely three) revisions to the NTLM standard:

  • NT Lan Manager
  • NTLMv1
  • NTLMv2

All three have been long deprecated; it’s safe to say that NT Lan Manager (“LanMan” for short) offers no real protection, NTLMv1 offers limited protection, and NTLMv2 – being the latest revision – is the better of the three. In terms of resistance to cracking, LanMan and NTLMv1 cannot be used securely (short of encrypting the transport with something like IPSec); only NTLMv2 may be considered secure from this standpoint.

To be clear, all three are vulnerable to credential theft attacks like PtH. (To be very clear, Kerberos is vulnerable to similar attacks like Pass the Ticket.) That said, there are three compelling reasons for removing NTLMv1 from your environment, even if you leave NTLMv2 for backward compatibility (and you will probably need to, at least for a while).

First, it is far less complex for an attacker to anticipate the challenge length in NTLMv1, as it is always a 16-byte random number. NTLMv2, on the other hand, uses a challenge of variable length.

Second, for challenge encryption, recall that NTLMv1 uses DES encryption, whereas NTLMv2 uses the stronger HMAC-MD5. As of now, it’s not feasible to brute-force HMAC-MD5. (Quantum computing will change everything, but that’s a different story.)

Third, even if you’re somehow not concerned about the security vulnerabilities inherent in NTLMv1, you’ll have to remove it to use Windows 10 Credential Guard (and you should absolutely use Credential Guard). Because NTLMv1 is so much less secure, the only protocols that Credential Guard supports are Kerberos and NTLMv2. You’ll have to address the sources of NTLMv1 before using Credential Guard; else it won’t end well when you disallow NTLMv1.

NTLM versions are easily configurable via Group Policy (GPO) at Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options\Network security: LAN Manager authentication level, with six different options ranging from 0 (least secure) to 5 (most secure).

A rule of thumb that I encourage customers to embrace is to start auditing with level three. Level three sets NTLMv2 as the default, but allows for fallback to the older protocol version. Eventually, we’ll want to get to level five, but we’re not ready yet. We need to audit, and that’s where VAST comes in.
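
That GPO maps to the LmCompatibilityLevel registry value, so you can spot-check the effective level on any given machine (a quick sketch; an absent value means the OS default applies):

# Read the effective LAN Manager authentication level (0-5) on this machine.
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa' `
    -Name LmCompatibilityLevel -ErrorAction SilentlyContinue |
    Select-Object LmCompatibilityLevel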

The Challenge (so to speak)

At least three native methods of logging NTLM traffic exist. For our project and because we need to explicitly log the version, we’ll focus on Event ID 4624, An account was successfully logged on.

The good news is that this is a very high-value, verbose event. It clearly tells us the account, originating Workstation name and its IP address (to be exact, this is the workstation name and IP of the computer that last chained the NTLM request). Critically for us here, 4624 also shows us the Package Name, which in this case is NTLMv1. So we can construct a pretty good story out of this data: The built-in Administrator account authenticated using NTLMv1 to the local machine from computer Workstation1, which has IP 192.168.2.61.
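
If you want to eyeball this on a single machine before aggregating anything, a rough sketch like the following pulls recent 4624 events and keeps only those whose package name is NTLM V1 (it matches on the message text, so treat it as illustrative):

# Pull recent 4624 (successful logon) events and keep those that used NTLM V1.
$events = Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4624 } -MaxEvents 5000

$events |
    Where-Object { $_.Message -match 'Package Name \(NTLM only\):\s+NTLM V1' } |
    Select-Object TimeCreated, MachineName, Id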

Inspecting this event is efficient enough, but look closely. In my lab (as in most environments), I have thousands of events to comb through – in this case over 28,000 of them on this single machine. This is partly because 4624 tracks all successful logons – not just those using NTLM.

Once again – what we have here is a problem of big data.

Enter Project VAST

We need to deal with this data in two ways. First, we clearly need to aggregate it. Windows Event Forwarding and SIEMs do this well enough. But in my experience, aggregation is simply not enough. After all, combing through this amount of data, even after aggregation, is not always very realistic; too few organizations are successful using aggregation alone. This brings me to the second necessary factor here: we need to make the big data set consumable and truly actionable.

We’ll start, as we always do in Project VAST, with Azure Log Analytics. Once we have the data aggregated in Azure, we can create Kusto queries to view the data and control the output.

In AD, we weren’t able to easily filter natively to only NTLM authentications; recall that 4624 is a successful logon, regardless of protocol. In Azure Log Analytics, we can easily query on the AuthenticationPackageName field, as we’ve done here. Still, in just 24 hours, we have 209 NTLM logons to sift through. Overall that’s better, but we haven’t rendered the data really actionable yet. After all, we want folks to make well-informed, data-centric decisions about their security budget.
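
For reference, the same filter can also be run outside the portal; here is a hedged sketch using the Az.OperationalInsights module (the $workspaceId variable and the 24-hour window are illustrative):

# Query the aggregated 4624 data in Azure Log Analytics for NTLM logons over
# the last 24 hours (assumes $workspaceId holds your Log Analytics workspace ID).
$query = @"
SecurityEvent
| where EventID == 4624 and AuthenticationPackageName == "NTLM"
| where TimeGenerated > ago(24h)
| project TimeGenerated, Account, WorkstationName, IpAddress, LmPackageName
"@

(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results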

A Closer Look

Let’s take a look at Project VAST’s NTLM tab. Recall that here we are exporting the Kusto query out of Azure Log Analytics and importing it into Power BI. This configuration allows Power BI to query Azure Log Analytics directly with no need for intermediary data sources. The NTLM tab in Project VAST allows us to visualize the 4624 data and filter the display in a number of ways.

Start by focusing your attention on the NTLM Version filter that I’ve marked above with the red arrow. Because 4624 includes the Package Name attribute, we can filter to either V1 or V2. For the reasons we’ve discussed earlier in this article, Brian and I have made the decision to default VAST filtering on this page to NTLMv1 only. This will help you focus on the less secure authentication traffic patterns.

Directly above the NTLM Version filter is a filter titled isAdmin. Because Project VAST both queries your AD for members of built-in groups (like Domain Admins and Server Operators, for example) and also allows you to specify administrative accounts, we can filter to only admin accounts, only non-admin accounts, or (as we’ve done here) apply no filter at all. This view is therefore displaying the NTLMv1 authentication traffic for both admin and non-admin accounts. This is a good place to start.

Below the filter for NTLM Version, have a look at NTLM Auth by Account (Top 5). As in tabs we’ve discussed previously, the yellow bars represent NTLM traffic from non-Admin accounts; the red, for Admin.

If you’ve read the previous entries, this look and feel should be becoming familiar. In the upper left-hand corner, we have represented the flow of data of the top five NTLM authentications. In my lab, there are only two due to its size, 192.168.2.57 and .56, each sending authentications against DCs 1 and 2.

Now that we have an idea of the most significant culprits and authentication flows, let’s drill down to some truly actionable data. By clicking on one of the accounts in NTLM Auth by Account, we can examine data that solely pertains to that account and the other filters that we have applied. Let’s click on svc7.

On display now is only the data pertaining to svc7’s NTLMv1 authentication flows within our data set. We can easily see the host IP, the authenticating DCs, timestamp information, and some raw data. We now have the understanding that we need in order to take action – starting with determining the process running on 192.168.2.56 responsible for NTLMv1 traffic. Next we’ll work with application owners, vendors, or infrastructure teams to change the traffic over to NTLMv2 or Kerberos.

In other words, we will have surfaced a vulnerability and then mitigated it – making for a nice story of progress as well as return on investment along our security roadmap. And like all journeys, our work with NTLMv1 won’t last indefinitely. Once we’re satisfied that we’ve mitigated all of our NTLMv1 traffic (e.g. the NTLM tab in Project VAST, filtered to V1, is blank), then it’s time to change our GPO setting to five. All new NTLM traffic will have to use NTLMv2, since the DCs won’t accept any of the other five levels of negotiation.

That wraps it up for Project VAST and NTLM auditing for now. Good luck, let us know how we can help and, as always, happy auditing.

RPO-RTO Backup and Site Recovery


RPO and RTO in Azure Backup and Azure Site Recovery

 

Hello.

 

Information keeps growing: it is estimated that the data held by companies doubles every year, and the biggest challenge this presents is protecting that information reliably. That protection must cover everything from accidental deletions to natural disasters, and for that there are two solutions: backup and site recovery.

 

The capabilities of Backup and Site Recovery are often confused. Both capture data and provide recovery procedures, but their core purposes are different.

Azure Backup backs up data from servers and machines to the cloud. Azure Site Recovery coordinates replication of virtual and physical machines, as well as failovers between your site and the cloud. You need both for a complete disaster recovery solution: your disaster recovery strategy needs to keep your information safe and recoverable (Backup) and keep your workloads available and accessible (Site Recovery) when incidents occur.

 

To understand the different roles of Backup and Site Recovery, we need to keep these concepts in mind:

 

RPO – Recovery Point Objective

This concept defines the time elapsed between the last replication or data recovery point and the moment the service interruption occurs, and it represents the potential data loss in the business continuity plan.

 

RTO – Recovery Time Objective

This concept defines the time elapsed from the moment the service interruption occurs until the systems are one hundred percent operational again for end users.

 

 

BACKUP

A backup solution takes copies of information such as files and databases to protect against data loss, human error and hardware failure. In other words, it makes copies of the information at different points in time so it can be recovered when needed. Depending on business requirements, there are backup schedules (typically around midnight) and weekly, monthly and yearly retention periods. This is why backups are ideal for recovering historical information.

 

So we typically have daily recovery points, which means that in the event of a disaster the closest recovery point is the previous night. That can translate into an RPO of up to 20 hours, losing the data from all of the current day's operations.

 

We also know that restoring from backups can take a very long time, so in an emergency, recovering systems from backups can have a major effect on business continuity, taking up to days depending on the volume of information to recover. That results in a very high RTO, with the corresponding impact of business downtime.

 

 

SITE RECOVERY

Site recovery solutions, on the other hand, aim to restore business operations as quickly as possible with minimal data loss. This is achieved by frequently replicating the information to an alternate data center, either physical or in the cloud, so that you can resume operations in the shortest possible time and with the least data loss should an event interrupt the services in the original data center.

 

Azure Site Recovery lets you use Azure as the disaster recovery data center for your virtual machines. In a world where everyone expects uninterrupted connectivity, keeping infrastructure and applications up and running is more important than ever. The purpose of business continuity and disaster recovery (BCDR) is to restore failed components so that the organization can quickly resume normal operations.

 

It is crucial for BCDR planning that the Recovery Time Objective (RTO) and the Recovery Point Objective (RPO) are defined as part of the disaster recovery plan. When a disaster hits the data center, with Azure Site Recovery customers can quickly bring online (low RTO) their replicated virtual machines located in the secondary data center or in Microsoft Azure, with minimal data loss (low RPO).

 

The Site Recovery service contributes to a robust disaster recovery solution that protects servers and data by automating replication and failover to Azure or to a secondary data center.

 

 

Therefore, the main differences between the goals of Backup and Site Recovery solutions can be summarised as follows:

Recovery Point Objective (RPO)
  • What it measures: the amount of data loss that is acceptable if site recovery is needed.
  • Backup: backup solutions cover a wide range of acceptable RPOs. Backups usually have an RPO of one day (daily backups), while database backups can have RPOs as low as 15 minutes.
  • Disaster Recovery: disaster recovery solutions have extremely low RPOs; the recovery copy can be only a few minutes behind.

Recovery Time Objective (RTO)
  • What it measures: the amount of time it takes to complete recovery of the services.
  • Backup: because of the RPOs inherent in backups, the amount of data a backup needs to process is typically very large, which leads to long RTOs. For example, it could take days to restore data from tape, depending on the time it takes to transport the tapes to the recovery site.
  • Disaster Recovery: disaster recovery solutions have much lower RTOs because they are based on synchronisation with the source servers, so far fewer changes need to be processed.

Retention
  • What it measures: how long the data needs to be stored.
  • Backup: for scenarios that require operational recovery (corrupted data, accidental deletions, OS failures), backups are typically retained for 30 days or less. For regulatory compliance, data may need to be stored for months or years; backups are ideal for this kind of historical information.
  • Disaster Recovery: disaster recovery only needs to recover the data required to resume operations, which typically takes a few hours or up to a day. Because of the fine-grained data capture used in disaster recovery solutions, retaining recovery points for long periods is not recommended.

 

 

Thank you, and we hope you find this useful.

 

Regards

 

Mariano Carro

Send mail to latampts

 

 

 

 

 

Watch the Windows Server Summit on demand


I'm not the type of person who sets an alarm at an unappealing time of the morning to watch an online event, especially knowing that I can watch the replay shortly afterwards. No, I'm not talking about the World Cup, I'm talking about the Windows Server Summit. I'm still working my way through the sessions that are most important to me, and over on the Storage at Microsoft blog, Cosmos Darwin has posted the five big announcements for Storage Spaces Direct (S2D) and Hyper-Converged Infrastructure (HCI). Note that Cosmos is focused on storage, so his top 5 list could be quite different to yours or mine. Once we've got an understanding of the SKU lineup inclusions, I'll put something similar together for features that are in the Standard edition.

Cosmos' top 5 listing is...

Go bigger, up to 4PB

The new maximum size per storage pool is 4 petabytes (PB), or 4,000 terabytes. All related capacity guidelines and/or limits are increasing as well: for example, Storage Spaces Direct in Windows Server 2019 supports twice as many volumes (64 instead of 32), each twice as large as before (64 TB instead of 32 TB).

True two-node at the edge

Need to set up a two-node cluster in a branch or disconnected location? Want to use the USB drive capability of your router to act as the witness? Well, provided your router supports SMB2 (no, not SMB1) this is something that can now be done. New documentation is coming that lists the compatible hardware, and it might be a gentle reminder to those with older routers that haven't received security updates for a while that it might be time to get them up to date or to replace them.

Drive latency outlier detection

Drives with abnormal behavior, whether it’s their average or 99th percentile latency that stands out, are automatically detected and marked in PowerShell and Windows Admin Center as “Abnormal Latency” status. This gives Storage Spaces Direct administrators the most robust set of defenses against drive latency available on any major hyper-converged infrastructure platform.

Faster mirror-accelerated parity

In Windows Server 2019, the performance of mirror-accelerated parity has more than doubled relative to Windows Server 2016! Mirror continues to offer the best absolute performance, but these improvements bring mirror-accelerated parity surprisingly close, unlocking the capacity savings of parity for more use cases.

Greater hardware choice

Since Ignite 2017, the number of available hardware SKUs has nearly doubled, to 33. To deploy Storage Spaces Direct in production, Microsoft recommends Windows Server Software-Defined hardware/software offers from our partners, which include deployment tools and procedures. They are designed, assembled, and validated against our reference architecture to ensure compatibility and reliability, so you get up and running quickly.

Head on over to read the full post.

SharePoint 2016 | CORS | JavaScript/CSOM calls not working/loading in Edge or Chrome when accessing site through Reverse Proxy URL or Network Load Balancer. SharePoint throwing 403 forbidden error.


SYMPTOM
Symptom 1: SharePoint returns an unexpected response (403 error) in Edge or Chrome, but not in Internet Explorer, whenever a call to client.svc/ProcessQuery is sent to the server as an incoming request.

For example, this happens after adding a people column to a document library and typing in a username to test the people picker.

Symptom 2: SharePoint returns an unexpected response (403 error) in Edge or Chrome, but not in Internet Explorer, when running JavaScript from a content editor web part.

CAUSE
SharePoint 2016 has a security feature that will compare the actual request URL with the request origin header. If they don't match, the request will be rejected with status 403.

In order to verify that this is the problem, add a hosts file entry on your local client machine that resolves the SharePoint web site URL to a SharePoint web front-end server IP address, bypassing the Network Load Balancer or Reverse Proxy.
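
For instance, a quick (and easily reverted) way to add that test entry from an elevated PowerShell prompt, using the addresses from this example environment:

# Resolve the public URL straight to a web front end, bypassing the reverse
# proxy / load balancer. Remove the entry when you're done testing.
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "192.168.2.53`tmelissa.contoso.com"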

RESOLUTION
Microsoft recommends configuring a rule in your Reverse Proxy or Network Load Balancer to adjust the origin to match the original request.

In case you don't have access to this, you can create a rewrite rule in IIS. Implement the following IIS inbound rewrite rules to overcome the 403 error for JavaScript/CSOM calls not working/loading when accessing the site through the Reverse Proxy or Network Load Balancer URL.

Before trying out anything you find on the internet, make sure you are in a testing environment and have known good backups.

1.  Make sure URL Rewrite is available
               Download and install the IIS URL Rewrite module: https://www.iis.net/downloads/microsoft/url-rewrite
               Close and reopen IIS

2.  Configure Rewrite Rules and add Server Variables:
               Go to your SharePoint site.
               Click on URL Rewrite:



On the right, under Actions, click View Server Variables
- Add these to the allowed server variables:
HTTP_Origin
HTTP_HOST


Click Back to Rules under the Actions menu on the right. Then, click to create an inbound rule:


- Create a new inbound rule
- Add this as regular expression filter:
.svc.+
- In Server Variables, click Add
- Use this information:
Name: HTTP_Origin
Value: http://{HTTP_HOST}
- For action choose 'None'
- Save the rule

- Create another new inbound rule to allow rewrite for the java scripts
- Add this as regular expression filter:
_api.+
- In Server Variables, click Add
- Use this information:
Name: HTTP_Origin
Value: http://{HTTP_HOST}
- For action choose 'None'
- Save the rule


In applicationHost.config you would see something like this (there may be other variables for other rules; leave them alone, but make sure that these two are included):

<rewrite>
  <allowedServerVariables>
    <add name="HTTP_Origin" />
    <add name="HTTP_HOST" />
  </allowedServerVariables>
</rewrite>

In web.config, you should see this:

<rewrite>
  <rules>
    <clear />
    <rule name="Origin">
      <match url=".svc.+" />
      <serverVariables>
        <set name="HTTP_Origin" value="http://{HTTP_HOST}" />
      </serverVariables>
      <action type="None" />
    </rule>
    <rule name="Origin2">
      <match url="_api.+" />
      <serverVariables>
        <set name="HTTP_Origin" value="http://{HTTP_HOST}" />
      </serverVariables>
      <action type="None" />
    </rule>
  </rules>
</rewrite>

MORE INFORMATION

https://support.microsoft.com/en-us/help/2818415/supportability-of-rewrites-and-redirects-in-sharepoint-2013-2010-and-2

DATA ANALYSIS:

From the client machine where you just configured Fiddler, browse to the site with the public URL for the zone.

In Fiddler, note the Origin and Host headers in the Headers tab in the upper right, and find the correlation ID to search for in your SharePoint logs in the Miscellaneous section in the lower right, listed as request-id or SPRequestGuid:

Symptom 1: Fiddler


05/29/2018 18:45:08.23    w3wp.exe (0x1DF4)    0x2618    SharePoint Foundation    Logging Correlation Data    xmnv    Medium    Name=Request (POST:http://sp.contoso.com/sites/corstest/_vti_bin/client.svc/ProcessQuery)    06046c9e-92a0-40c7-b2ae-2165c547d61c

05/29/2018 18:45:08.24    w3wp.exe (0x1DF4)    0x1804    SharePoint Foundation    CSOM    agw10    Medium    Begin CSOM Request ManagedThreadId=6, NativeThreadId=6148    06046c9e-92a0-40c7-b2ae-2165c547d61c

05/29/2018 18:45:08.24    w3wp.exe (0x1DF4)    0x1804    SharePoint Foundation    CSOM    azvn3    Medium    Request is a Cross-Origin request. Origin is : 'http://melissa.contoso.com'. Host is : http://sp.contoso.com/_vti_bin/client.svc/ProcessQuery    06046c9e-92a0-40c7-b2ae-2165c547d61c

05/29/2018 18:45:08.24    w3wp.exe (0x1DF4)    0x1804    SharePoint Foundation    CSOM    azvn4    Medium    Request is a Cross-Origin request for a user that was not authenticated using OAuth. Returning 403    06046c9e-92a0-40c7-b2ae-2165c547d61c

05/29/2018 18:45:08.24    w3wp.exe (0x1DF4)    0x1804    SharePoint Foundation    CSOM    aiv4g    Medium    OnBeginRequest returns false, do not need to continue process the request.    06046c9e-92a0-40c7-b2ae-2165c547d61c

05/29/2018 18:45:08.24    w3wp.exe (0x1DF4)    0x0934    SharePoint Foundation    Runtime    aoxsq    Medium    Sending HTTP response 403 for HTTP request POST to http://sp.contoso.com/_vti_bin/client.svc/ProcessQuery    06046c9e-92a0-40c7-b2ae-2165c547d61c

05/29/2018 18:45:08.24    w3wp.exe (0x1DF4)    0x0934    SharePoint Foundation    Monitoring    b4ly    Medium    Leaving Monitored Scope: (Request (POST:http://sp.contoso.com/sites/corstest/_vti_bin/client.svc/ProcessQuery)) Execution Time=14.3752; CPU Milliseconds=10; SQL Query Count=0;Parent=None    06046c9e-92a0-40c7-b2ae-2165c547d61c


Symptom 2:

Right-click the SPRequestGuid in the Miscellaneous section in the lower right, copy the value only, then open the SharePoint ULS logs and search for the correlation ID:

Remember, we browsed to http://melissa.contoso.com. Here is an excerpt for the correlation ID in this instance:

05/21/2018 18:49:04.38    w3wp.exe (0x163C)    0x1E10    SharePoint Foundation    Logging Correlation Data    xmnv    Medium    Name=Request (POST:http://sp.contoso.com/sites/corstest/_api/contextinfo)    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.38    w3wp.exe (0x163C)    0x1E10    SharePoint Foundation    General    adyrv    High    Cannot find site lookup info for request Uri http://sp.contoso.com/sites/corstest/_api/contextinfo.    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.38    w3wp.exe (0x163C)    0x1E10    SharePoint Foundation    Audience Validation    a9fy7    Medium    The audience uri loads a web application matches. AudienceUri: 'http://melissa.contoso.com/', InputWebApplicationId: '8e26ceaa-446b-45bc-ba30-4fc65baeec0f', InputURLZone: 'Default'.    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.39    w3wp.exe (0x163C)    0x1D24    SharePoint Foundation    CSOM    agw10    Medium    Begin CSOM Request ManagedThreadId=42, NativeThreadId=7460    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.39    w3wp.exe (0x163C)    0x1D24    SharePoint Foundation    CSOM    azvn3    Medium    Request is a Cross-Origin request. Origin is : 'http://melissa.contoso.com'. Host is : http://sp.contoso.com/_vti_bin/client.svc/contextinfo    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.39    w3wp.exe (0x163C)    0x1D24    SharePoint Foundation    CSOM    azvn4    Medium    Request is a Cross-Origin request for a user that was not authenticated using OAuth. Returning 403    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.39    w3wp.exe (0x163C)    0x2368    SharePoint Foundation    General    adyrv    High    Cannot find site lookup info for request Uri http://sp.contoso.com/sites/corstest/_api/contextinfo.    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.39    w3wp.exe (0x163C)    0x2368    SharePoint Foundation    Runtime    aoxsq    Medium    Sending HTTP response 403 for HTTP request POST to http://sp.contoso.com/_vti_bin/client.svc/contextinfo    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.39    w3wp.exe (0x163C)    0x2368    SharePoint Foundation    General    azrx9    Medium    LookupHostHeaderSite: Using site lookup provider Microsoft.SharePoint.Administration.SPConfigurationDatabaseSiteLookupProvider for host-header site-based multi-URL lookup string http://sp.contoso.com/sites/corstest for request Uri http://sp.contoso.com/sites/corstest/_api/contextinfo.    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.39    w3wp.exe (0x163C)    0x2368    SharePoint Foundation    General    adyrv    High    Cannot find site lookup info for request Uri http://sp.contoso.com/sites/corstest/_api/contextinfo.    1271699e-0247-40c7-b2ae-2e61ad704f51

05/21/2018 18:49:04.39    w3wp.exe (0x163C)    0x2368    SharePoint Foundation    Monitoring    b4ly    Medium    Leaving Monitored Scope: (Request (POST:http://sp.contoso.com/sites/corstest/_api/contextinfo)) Execution Time=34.3893; CPU Milliseconds=21; SQL Query Count=14; Parent=None    1271699e-0247-40c7-b2ae-2e61ad704f51

Here in the network trace, we can see the request is coming from 192.168.2.51 – this is where the reverse proxy is running – and we can see SharePoint (192.168.2.53) reply with the 403 Forbidden error message. Note that the highlighted Host and Origin headers do not match, resulting in the 403 error.


SETUP/SCENARIO

Symptom 1:

  1. Configure environment with a path based site collection.
  2. Create a document library
  3. Add a person column to the library
  4. In Chrome, browse the library with reverse proxy URL


Symptom 2:

  1. Configure environment with a path based site collection.
  2. Configure the site collection to run JavaScript from a content editor web part
    1. Have a content editor web part configured on a page, for example: http://sp.contoso.com/sites/corstest/SitePages/example.aspx
    2. Upon editing the content editor web part, there is a content link set to /sites/corstest/SiteAssets/example.js
  3. Find the example.js code at the end of the post.

CONFIGURATION

  1. Starting config - Alternate Access Mappings

    Prior to configuring SharePoint to use a different URL and configuring the reverse proxy:

    The web app URL: http://sp


    The alternate access mapping:


  2. Modified config – AAMs. Note that AAMs are deprecated in SharePoint 2016.

    Configured AAM for the "new" URL:


    Which automatically updates the web application URL:


    Add a DNS entry or hosts file for melissa.contoso.com.

    Irrespective of browser, there is no issue or 403 error browsing to http://melissa.contoso.com or loading the example.aspx page referencing the JavaScript.

    *Please note if the public URL for the zone is added as https, then on the SP servers in IIS it will be necessary to add a binding for https port 443 and an SSL certificate.

  3. Configure Fiddler as a reverse proxy. There's lots of documentation and videos on this, but here is the short of it.

    Tools, Options, Connections tab: check "Allow remote computers to connect"


Then, back at the menu bar, select Rules, Customize Rules, and the Fiddler ScriptEditor window should open. From its menu, click Go, then go to OnBeforeRequest.

Add the following (with your own URLs, of course) after the comments in the section:

static function OnBeforeRequest(oSession: Session) {
    if (oSession.HostnameIs("melissa.contoso.com"))
    {
        oSession.hostname = "sp.contoso.com";
    }
}

JAVASCRIPT CODE EXAMPLE

The example.js contents are:

<html>
<head>
<title>Cross-domain sample</title>
</head>
<body>

<!-- This is the placeholder for the announcements -->
<div id="renderAnnouncements"></div>

<script type="text/javascript" src="//ajax.aspnetcdn.com/ajax/jQuery/jquery-1.7.2.min.js"></script>
<script type="text/javascript" src="//ajax.aspnetcdn.com/ajax/4.0/1/MicrosoftAjax.js"></script>
<script type="text/javascript" src="/_layouts/15/sp.runtime.js"></script>
<script type="text/javascript" src="/_layouts/15/sp.js"></script>

<script type="text/javascript">
//var hostwebURL;
//var appwebURL;

// Load the required SharePoint libraries
$(document).ready(function () {
    SP.SOD.executeFunc('sp.js', 'SP.ClientContext', getProjectURL);

    // //Get the URI decoded URLs.
    // hostwebURL = decodeURIComponent(getQueryStringParameter("SPHostURL"));
    // appwebURL = decodeURIComponent(getQueryStringParameter("SPAppWebURL"));

    // resources are in URLs in the form:
    // web_URL/_layouts/15/resource
    var scriptbase = getProjectURL() + "/_layouts/15/";

    // Load the js files and continue to the success handler
    $.getScript(scriptbase + "SP.RequestExecutor.js", execCrossDomainRequestA);
});

// Function to prepare and issue the request to get SharePoint data
function execCrossDomainRequest() {
    // executor: the RequestExecutor object.
    // Initialize the RequestExecutor with the add-in web URL.
    var executor = new SP.RequestExecutor(appwebURL);

    // Issue the call against the add-in web.
    // To get the items using REST we can hit the endpoint:
    // appwebURL/_api/web/lists/getbytitle('listname')/items
    // The response formats the data in the JSON format.
    // The functions successHandler and errorHandler handle the
    // success and error events respectively.
    executor.executeAsync(
        {
            url: appwebURL + "/_api/web/lists/getbytitle('Announcements')/items",
            method: "POST",
            headers: { "Accept": "application/json; odata=verbose" },
            success: successHandler,
            error: errorHandler,
            crossDomain: true
        }
    );
}

function successHandlerA(data, req) {
    var announcementsHTML = "";
    var enumerator = allAnnouncements.getEnumerator();
    while (enumerator.moveNext()) {
        var announcement = enumerator.get_current();
        announcementsHTML = announcementsHTML +
            "<p><h1>" + announcement.get_item("Title") +
            "</h1>" + announcement.get_item("Body") +
            "</p><hr>";
    }
    document.getElementById("renderAnnouncements").innerHTML = announcementsHTML;
}

// Function to handle the success event.
// Prints the data to the page.
function successHandler(data) {
    var jsonObject = JSON.parse(data.body);
    var announcementsHTML = "";
    var results = jsonObject.d.results;
    for (var i = 0; i < results.length; i++) {
        announcementsHTML = announcementsHTML +
            "<p><h1>" + results[i].Title +
            "</h1>" + results[i].Body +
            "</p><hr>";
    }
    document.getElementById("renderAnnouncements").innerHTML = announcementsHTML;
}

// Function to handle the error event.
// Prints the error message to the page.
function errorHandler(data, errorCode, errorMessage) {
    document.getElementById("renderAnnouncements").innerText =
        "Could not complete cross-domain call: " + errorMessage;
}

function execCrossDomainRequestA() {
    // context: the ClientContext object provides access to
    // the web and lists objects.
    // factory: initialize the factory object with the add-in web URL.
    var addinwebURL = getProjectURL();
    var context = new SP.ClientContext(addinwebURL);
    var factory = new SP.ProxyWebRequestExecutorFactory(addinwebURL);
    context.set_webRequestExecutorFactory(factory);

    // Get the web and list objects and prepare the query
    var web = context.get_web();
    var list = web.get_lists().getByTitle("Announcements");
    var camlString =
        "<View><ViewFields>" +
        "<FieldRef Name='Title' />" +
        "<FieldRef Name='Body' />" +
        "</ViewFields></View>";
    var camlQuery = new SP.CamlQuery();
    camlQuery.set_viewXml(camlString);
    allAnnouncements = list.getItems(camlQuery);
    context.load(allAnnouncements, "Include(Title, Body)");

    // Execute the query with all the previous options and parameters
    context.executeQueryAsync(successHandlerA, errorHandler);
}

function getProjectURL() {
    var URLToReturn = "";
    var baseURL = document.URL.split("/");
    //URLToReturn = baseURL[0] + "//" + baseURL[2] + _spPageContextInfo.siteServerRelativeUrl + pageURL + "?" + queryStringKey + "=" + queryStringValue + "&" + categoryString + "&ViewMode=1";
    URLToReturn = baseURL[0] + "//" + baseURL[2] + _spPageContextInfo.siteServerRelativeUrl;
    return (URLToReturn);
}

// Function to retrieve a query string value.
// For production purposes you may want to use
// a library to handle the query string.
function getQueryStringParameter(paramToRetrieve) {
    var params = document.URL.split("?")[1].split("&");
    var strParams = "";
    for (var i = 0; i < params.length; i = i + 1) {
        var singleParam = params[i].split("=");
        if (singleParam[0] == paramToRetrieve)
            return singleParam[1];
    }
}
</script>

</body>
</html>

The End 🙂
Thanks for reading, thank you, and thanks to all those who came before us. Share your experience and submit your suggestions for SharePoint here: https://sharepoint.uservoice.com

July 2018 Hot Sheet partner training schedule


Welcome to the US Partner Community Hot Sheet, a comprehensive schedule of partner training, webcasts, community calls, and office hours. This post is updated frequently as we learn about new offerings, so you can plan ahead. Looking for product-specific training? Try the links across the top of this blog.

Community call schedule

Community calls for the US Partner Community are led by experts from across the US Partner Team, and provide practice-building and business-building guidance.

Community name | July calls information | August calls information
Applications & Infrastructure | No call in July. Look for new schedule soon | No call in August. Look for new schedule soon
Azure Government | No call in July. Look for new schedule soon | No call in August. Look for new schedule soon
Business Applications | No call in July. Look for new schedule soon | No call in August. Look for new schedule soon
Cloud Services Partner Incentives | July 26 | Call schedule will be available soon
Data & Artificial Intelligence (AI) | No call in July. Look for new schedule soon | No call in August. Look for new schedule soon
Marketing SureStep Office Hours | Every Thursday | Every Thursday
Modern Workplace – Productivity | No call in July. Look for new schedule soon | No call in August. Look for new schedule soon
Modern Workplace – Security | No call in July. Look for new schedule soon | No call in August. Look for new schedule soon
Modern Workplace – Windows & Devices | No call in July. Look for new schedule soon | No call in August. Look for new schedule soon
MPN 101 | July 11 - Know before you go to Microsoft Inspire | Call schedule will be available soon
Open Source Solutions | No call in July. Look for new schedule soon | No call in August. Look for new schedule soon
Partner Insider | No call in July. Look for new schedule soon | Call schedule will be available soon

Week of June 25–29

Date | Location | Course, webcast or call | Who should attend
June 26 | Online | Creating apps for the Intelligent Cloud: Serverless and integration scenarios | Technical roles
June 26 | Online | Getting started with Azure Stack | Technical roles
June 27 | Community call | MPN 101: Know before you go to Microsoft Inspire 2018 | Business roles
June 27 | Online | Adopting Microsoft 365 Proactive Attack Detection and Prevention | Technical roles
June 27 | Online | Getting started with Partner Center CSP – Technical scenarios | Technical roles
June 27 | Online | Enhance your business with Dynamics 365 PowerApps and Flows | Business and technical roles
June 29 | Online | What's new in Azure Infrastructure as a service | Technical roles

Week of July 2–6

Date | Location | Course, webcast or call | Who should attend
July 5 | Online | Azure Stack architecture & deployment | Technical roles

Week of July 9–13

Date | Location | Course, webcast or call | Who should attend
July 10 | Online | Introduction to Skype for Business | Technical roles
July 10 | Online | What's new & highlights in Business Applications | Business and technical roles
July 11 | Community call | MPN 101: Know before you go to Microsoft Inspire 2018 | Business roles
July 11 | Online | What’s new in Office 365 | Business and technical roles
July 11 | Online | Introduction to Microsoft 365 Deployment | Technical roles
July 12 | Online | Enhance your business with Skype for Business Online Academy | Technical roles
July 12 | Online | Partner Center CSP – Application onboarding | Technical roles
July 13 | Online | Creating apps for the Intelligent Cloud: Architecting cloud apps for scale | Technical roles

Week of July 16–20

Date | Location | Course, webcast or call | Who should attend
July 15–19 | Las Vegas, NV | Microsoft Inspire | Business, sales, and technical roles
July 16 | Online | Introduction to Microsoft 365 Management | Technical roles
July 17 | Online | What’s new and highlights in Business Applications | Business and technical roles
July 17 | Online | Adopting Microsoft Teams | Technical roles
July 18 | Online | Adopting Microsoft 365 powered device: Deployment | Technical roles
July 19 | Online | Cortana Intelligence Suite: Big Data Analytics using Data Lake | Technical roles

Week of July 23–27

Date | Location | Course, webcast or call | Who should attend
July 22–24 | Seattle, WA | Microsoft Business Applications Summit | Analysts, business users, IT professionals, developers, and Microsoft Business Applications partners
July 24 | Online | Adopting Microsoft 365 powered device: Management | Technical roles
July 24 | Online | Introduction to Azure Site Recovery and Backup | Technical roles
July 24 | Online | Introduction to Dynamics 365 Customer Engagement: Technical onboarding | Technical roles
July 25 | Online | Introduction to Microsoft Azure IaaS | Technical roles
July 26 | Community call | Cloud Services Partner Incentives | Business roles
July 26 | Online | Migrating Applications to Microsoft Azure | Technical roles
July 26 | Online | What's new in Azure Infrastructure as a service | Technical roles
July 26 | Online | Introduction to Dynamics 365 Customer Engagement: Basics of customization | Technical roles

Week of July 30–August 3

Date | Location | Course, webcast or call | Who should attend
July 30 | Online | Introduction to Microsoft 365 Security and Compliance | Technical roles

Microsoft 2018 events

Microsoft Inspire 2018: July 15–19 in Las Vegas, Nevada

Microsoft Ignite 2018: September 24–28 in Orlando, FL

Virtual 2018 U.S. One Commercial Partner (OCP) Partner Briefing (on demand)

Dynamics 365 PSA Implementation Support Service


[Provided by: Avanade Inc.]

Extends Dynamics 365 for Project Service Automation (PSA) and optimizes it for your business operations.

 

Supports the entire service delivery lifecycle (sales, operations, and close-out).
It streamlines and optimizes project management and resource management, the profit drivers specific to service businesses, and, through integration with Microsoft technologies, supports your operations across the whole service delivery lifecycle.

 

■ Challenges addressed

Configure projects around each customer and manage the people, costs, and materials involved in a project in one place. Improve employee productivity while completing projects on schedule and within budget.

 

■ Pricing

Pricing is quoted individually; please contact us.

 

■ Target industries

Manufacturing, distribution, and others

 

■ Coverage area

Nationwide (Japan)

 

 

 


[Webinar] Learn the basics of migrating servers to Azure in 60 minutes [Updated 6/28]


<Date and time>

Friday, July 13, 2018, 12:00–13:00

 

<Overview>
This session explains the approach and tools for migrating on-premises servers to Azure.
If you are considering moving on-premises servers to the cloud, including as part of your response to the 2008 end of support (EOS), please join us.

 

<Agenda>
• Azure virtual machine environment and points to note
A brief explanation of the Azure VM environment and the considerations that apply when migrating.
• Assessing the existing environment with Azure Migrate
An introduction to Azure Migrate, a tool for assessing on-premises virtual environments.
• Server migration to Azure using migration tools
Covers server migration centered on Azure Site Recovery, along with other tools.

 

<Notes>
This session is intended for those with experience proposing, designing, building, or operating on-premises Hyper-V or VMware virtual environments.
Azure IaaS is only touched on briefly; if you would like to get up to speed on Azure IaaS beforehand, please watch the on-demand session "Azure IaaS の基礎から VM サイズ選択方針まで一気に理解してしまおう!" (Understand Azure IaaS from the basics through VM size selection in one go).

 

Register for the webinar here

 

 

Attachment file types blocked by Outlook differ by version


Hello, this is the Microsoft Japan Outlook support team.

To help you use Outlook safely, several attachment file types are blocked by default.
The blocked file types differ depending on your Outlook version and which updates have been applied.
As of June 28, 2018, the lists are as follows.

 

Outlook 2016 MSI version 16.0.4573.1000 or later (with the July 2017 update KB4011052 or later applied)
Outlook 2016 Click-to-Run version 16.0.8004.1000 or later
ade、adp、app、asp、bas、bat、cer、chm、cmd、cnt、com、cpl、crt、csh、der、diagcab、exe、
fxp、gadget、grp、hlp、hpj、hta、inf、ins、isp、its、jar、jnlp、js、jse、ksh、lnk、mad、maf、mag、
mam、maq、mar、mas、mat、mau、mav、maw、mcf、mda、mdb、mde、mdt、mdw、mdz、
msc、msh、msh1、msh2、msh1xml、msh2xml、mshxml、msi、msp、mst、msu、ops、osd、
pcd、pif、pl、plg、prf、prg、printerexport、ps1、ps2、ps1xml、ps2xml、psc1、psc2、psd1、psdm1、pst、reg、scf、scr、sct、
shb、shs、theme、tmp、url、vb、vbe、vbp、vbs、vsmacros、vsw、webpnp、website、ws、wsc、wsf、wsh、xbap、xll、xnk

 

Outlook 2010 14.0.7188.5000 or later (with the September 2017 update KB4011089 or later applied)
Outlook 2013 15.0.4963.1000 or later (with the September 2017 update KB4011090 or later applied)
ade、adp、app、asp、bas、bat、bgi、cer、chm、cmd、cnt、com、cpl、crt、csh、der、exe、
fxp、gadget、grp、hlp、hpj、hta、inf、ins、isp、its、jar、jnlp、js、jse、ksh、lnk、mad、maf、mag、
mam、maq、mar、mas、mat、mau、mav、maw、mcf、mda、mdb、mde、mdt、mdw、mdz、
msc、msh、msh1、msh2、msh1xml、msh2xml、mshxml、msi、msp、mst、ops、osd、
pcd、pif、pl、plg、prf、prg、ps1、ps2、ps1xml、ps2xml、psc1、psc2、pst、reg、scf、scr、sct、
shb、shs、tmp、url、vb、vbe、vbp、vbs、vsmacros、vsw、ws、wsc、wsf、wsh、xbap、xll、xnk

 

Note
The following article describes the file types blocked by Outlook separately for "newer versions" and "Office 2007".
Here, "newer versions" refers to Outlook 2016 with the latest updates applied.

Blocked attachments in Outlook

________________________________________
The content of this article (including attachments and linked content) is current as of the date it was written and is subject to change without notice.

Update: Create-LabUsers Tool


Just when you thought it couldn't get more awesome.

It has.

By popular request, I have added a few new features (and fixed an annoyance).  First, the bug fix:

-Count 1

Yes, it's true. If you ran the Create-LabUsers script with -Count 1 with the -InflateMailboxes parameter, you'd run into an issue because of how I calculated the $MaxRecipients value.  Since I didn't want to totally crush the messaging system, I had elected to set $MaxRecipients to the maximum number of mailbox users / 3.  However, for -Count parameters of 1, this would cause an error with a Get-Random cmdlet, since you couldn't exactly find a random integer between 1 and 1.  It was definitely an oversight on my part--I never imagined that someone would use a bulk user tool to create just one user.
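For illustration, a guard along the following lines (a hypothetical sketch, not necessarily the exact change that went into the script) keeps Get-Random from being handed an impossible range:

# Hypothetical sketch: keep the random-recipient range valid even when -Count is 1.
$MaxRecipients = [Math]::Max(2, [Math]::Floor($Count / 3))
# Get-Random treats -Minimum as inclusive and -Maximum as exclusive,
# so this range always contains at least one value.
$RecipientCount = Get-Random -Minimum 1 -Maximum $MaxRecipients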

So, fixed.

Now, on to the new stuff!

Middle Name support

Along with pointing out my -Count oops, Darryl also had an idea for populating the AD middle name.  I had originally just populated the middle initial.  This was easy enough, using the first names seed data ($MiddleName = $Names.First[(Get-Random -Minimum 0 -Maximum $Names.First.Count)]) and then setting $MiddleIntial = $MiddleName[0].
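Written out as a block (same approach as the snippet quoted above; variable names here are illustrative):

# Pick a random entry from the first-names seed data to use as the middle name,
# then take its first character as the middle initial.
$MiddleName    = $Names.First[(Get-Random -Minimum 0 -Maximum $Names.First.Count)]
$MiddleInitial = $MiddleName[0]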

Easy peasy.

CreateResourceMailboxes

I might as well call this update the Darryl Chronicle, since this was also one of his requests.  As part of this update, I added a switch to allow you to create Exchange resource mailboxes:

  • Shared Mailboxes: Random number of shared mailboxes assigned per-department, per location
  • Equipment Mailboxes: Each location receives a fixed number (laptops and projectors)
  • Room Mailboxes: Each location receives a fixed number with varying room capacities
  • Room Lists: After creating the room mailboxes, the script will now create per-location room lists (special distribution lists that contain room objects for use with the Room Finder)

The latest version of the script is available on the Technet Gallery at http://aka.ms/createlabusers.

4 months until retirement: Access Control Service


By Anna Barhudarian (Principal PM Manager, Cloud Identity)

This post is a translation of "4 month retirement notice: Access Control Service", originally published on June 25, 2018.

 

Access Control Service (ACS) is officially being retired. Existing customers can continue to use the service until November 7, 2018; after that date ACS will be shut down and all requests to it will fail.

This post supplements the original retirement announcement (in English) with additional details.
 

Who is affected

Affected customers are those who have created one or more ACS namespaces in their Azure subscription. For example, Service Bus customers may have indirectly created an ACS namespace when they created a Service Bus namespace. If your apps and services do not use ACS, no action is required.
 

What you need to do

If you are using ACS, you need to plan a migration. The best migration path depends on how each of your existing apps and services uses ACS. If you need help, refer to the migration guide. In most cases, migration requires code changes.

You can check whether your apps or services use ACS by following the steps below. Since ACS management was removed from the Azure portal in April 2018, you previously had to contact Azure support to get a list of your namespaces; that is no longer necessary.
 

Access Control Service PowerShell now available

ACS PowerShell is a complete replacement for the ACS functionality that was in the Azure classic portal. For details, download it from the PowerShell Gallery (in English) and follow the instructions there.
 

How to list and delete ACS namespaces

Once ACS PowerShell is installed, you can identify and delete your ACS namespaces with the following simple steps.

1. Connect to ACS with the Connect-AcsAccount cmdlet.

2. List your available Azure subscriptions with the Get-AcsSubscription cmdlet.

3. List your ACS namespaces with the Get-AcsNamespace cmdlet.

You are most likely to see ACS namespaces if you signed up for Azure Service Bus before 2014. These namespaces can be identified by the -sb suffix. The Service Bus team provides a migration guide and will continue to post updates on its blog (in English).

4. Disable an ACS namespace with the Disable-AcsNamespace cmdlet.

This step is optional. Once you believe your migration is complete, we recommend disabling a namespace before deleting it. While disabled, requests to https://{namespace}.accesscontrol.windows.net return a 404 error. A disabled namespace is otherwise unchanged and can be restored with the Enable-AcsNamespace cmdlet.

5. Delete the ACS namespace with the Remove-AcsNamespace cmdlet.

This step permanently deletes the namespace, and it cannot be restored.
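Putting the cmdlets above together, a minimal cleanup pass could look like the following sketch (the cmdlet names come from the steps above; the parameter names and the example namespace are assumptions):

# Sign in and enumerate ACS namespaces across your subscriptions.
Connect-AcsAccount
Get-AcsSubscription
Get-AcsNamespace

# Once migration looks complete: disable first, verify nothing breaks, then remove.
Disable-AcsNamespace -Name "contoso-sb"    # parameter and namespace names are assumed
# Enable-AcsNamespace -Name "contoso-sb"   # restore if something still depends on it
Remove-AcsNamespace -Name "contoso-sb"     # permanent; cannot be undone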
 

Contact

For details about the ACS retirement, see the ACS migration guide. If you cannot find a migration option that works for you, or if you have questions or feedback about the retirement, contact acsfeedback@microsoft.com.

 

Update: Dynamics 365 Testing Tool


Earlier today, I was notified that the Dynamics 365 network URLs page was updated, so I updated my Dynamics test tool.

But then, I thought, what else could I put in it?

Never one to leave well enough alone, I started tinkering.  The result:

  • Updated network tests for crmdynint.com
  • Updated network tests for passport.net endpoints
  • Updated OS detection and reporting in log file.
  • Updated .NET Framework detection method.
  • Updated .NET Framework proxy detection.
  • Updated netsh proxy detection.
  • Updated TLS 1.2 configuration detection.
  • Added browser version detection for Internet Explorer, Edge, Chrome, and Firefox.

And, to boot, I gave it a shiny new URL: http://aka.ms/dynamicstest

About the proxy settings used by the System Center Configuration Manager client


Hello everyone. This is Shinoki from the SCCM support team.

 

This article describes the proxy settings used by the System Center Configuration Manager Current Branch (SCCM) client.

In particular, we explain how this differs from the behavior when WSUS is used on its own.

In general, when an application is designed to go through a proxy server, one of two APIs is used: the proxy settings configured in Internet Explorer (WinINet), or WinHTTP.

 

WinINet (Windows Internet)

https://docs.microsoft.com/ja-jp/windows/desktop/WinInet/portal

The Microsoft Windows Internet (WinINet) application programming interface (API) enables applications to access standard Internet protocols, such as FTP and HTTP.

WinHTTP

https://docs.microsoft.com/ja-jp/windows/desktop/WinHttp/about-winhttp

Microsoft Windows HTTP Services (WinHTTP) provides developers with a server-supported, high-level interface to the HTTP/1.1 Internet protocol. WinHTTP is designed to be used primarily in server-based scenarios by server applications that communicate with HTTP servers.


For more about the differences between these two APIs and about proxy configuration in general, see the following blog post (in Japanese):


Reference: Proxy settings as seen from Internet Explorer

https://blogs.technet.microsoft.com/jpieblog/2016/08/05/ie-%E3%81%8B%E3%82%89%E3%81%BF%E3%82%8B%E3%83%97%E3%83%AD%E3%82%AD%E3%82%B7%E3%81%AE%E8%A8%AD%E5%AE%9A%E3%81%AB%E3%81%A4%E3%81%84%E3%81%A6/


The SCCM client uses the latter of these two, WinHTTP, for its communication.

The SCCM client primarily uses HTTP/HTTPS to communicate with SCCM servers.

An important point to note is that WinHTTP is also used when the client communicates with the software update point (SUP: the site server with the WSUS role installed, default ports 8530/8531).
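Because the client relies on WinHTTP, the machine-wide WinHTTP proxy is the setting to check on the client. For reference, the standard netsh winhttp commands look like this when run from an elevated PowerShell prompt (the proxy and bypass values below are only examples):

# Display the current machine-wide WinHTTP proxy
netsh winhttp show proxy
# Set an explicit WinHTTP proxy with a bypass list
netsh winhttp set proxy proxy-server="proxy.contoso.com:8080" bypass-list="*.contoso.com"
# Import the current IE (WinINet) proxy settings into WinHTTP
netsh winhttp import proxy source=ie
# Remove the WinHTTP proxy (use a direct connection)
netsh winhttp reset proxy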

 

By default, the SCCM client scans for software updates against the software update point every 7 days.

During this scan, the SCCM client calls the Windows Update Agent API to scan against the WSUS instance on the software update point.

Note that this behavior differs from the scan performed by the Microsoft Windows Update client program when WSUS is used on its own.

 

Reference: Proxy settings used by Windows Update

https://blogs.technet.microsoft.com/jpwsus/2017/03/02/proxy-settings-used-by-wu/

 

Reference: How the Windows Update client determines which proxy server to use to connect to the Windows Update website

https://support.microsoft.com/ja-jp/help/900935/how-the-windows-update-client-determines-which-proxy-server-to-use-to-connect-to-the-windows-update-web-site

We hope this helps you understand the proxy settings used by the SCCM client.

 

 

- Disclaimer

This document is provided "as is". The information and views expressed in this document (including URLs and other Internet website references) are current as of the date of writing and may change without notice. You bear the risk of using it.

Unable to access Crawl History from SharePoint Central Admin


Summary

Have you experienced an issue where "Crawl History" is inaccessible in your Farm and throwing error, "Could not find stored procedure 'Search_GetRepositoryTimePerCrawl'"?

If so, you may have come across this blog https://blogs.msdn.microsoft.com/sambetts/2014/12/10/sharepoint-2013-crawl-history-error/ which details how the stored procedure is created and explains that the issue can be a disabled timer job that completes the provisioning. In those cases, just enabling and starting the "Search Health Monitoring - Trace Events" timer job does the trick.

However, if that didn't work in your case, please keep reading for a possible workaround...

Problem Description

Unable to access "Crawl History" with error, "Could not find stored procedure 'Search_GetRepositoryTimePerCrawl'"
and the "Search Health Monitoring - Trace Events" timer job is enabled.

 

Example:

 

Result:

Sorry, something went wrong

Could not find stored procedure 'Search_GetRepositoryTimePerCrawl'.

Technical Details

Correlation ID: 9f8f759e-2620-a083-a46d-e8b0cda512ca

Date and Time: 6/28/2018 10:30:04 AM

Cause

The "Search Health Monitoring - Trace Events" timer job unexpectedly fails to execute the provisioning process and the SQL changes are rolled back.

Resolution

To help it along, you can force the provisioning process associated with the "Search Health Monitoring - Trace Events" timer job by executing the following PowerShell commands.

 

 Add-PSSnapin microsoft.sharepoint.powershell -EA 0
 $diag = Get-SPDiagnosticsProvider -identity "Search Health Monitoring - Trace Events"
 $diag.OnProvisioning()

 

This process should force the initialization of the missing tables and stored procedures within the Search DB according to the definition of the "Search Health Monitoring - Trace Events" diagnostic provider.

More Information

Get-SPDiagnosticsProvider
https://docs.microsoft.com/en-us/powershell/module/sharepoint-server/get-spdiagnosticsprovider?view=sharepoint-ps

SPDiagnosticsProvider.OnProvisioning method
https://msdn.microsoft.com/en-us/library/office/microsoft.sharepoint.diagnostics.spdiagnosticsprovider.onprovisioning.aspx


Force Protected Apps or Devices | Conditional Access (3 of 4)


 

[Image: scenario summary]

 

We are back today for part two of our four-part series on conditional access scenarios for success. Today, we will discuss how to restrict resource access from mobile devices unless they are managed by Intune (and compliant) or using an approved application (like Outlook mobile). You may want to protect your corporate data, but you also want to balance the experience that end users have while using these protected resources. To do this, customers can leverage Conditional Access rules to secure email access while still giving end users a choice of which mail client they would like to use.

 

Many users love using the native applications on their mobile devices to access email, while others may be fine using Outlook mobile instead. We can allow users to access email in the application they want while staying secure. Regardless of the choice your end users make, IT can rest assured that they will be accessing mail in a secure way. In this scenario, your users have two choices:

  • Use the native mail client, but enroll my device in to Microsoft Intune
  • Use Outlook mobile with Intune App Protection policies applied to secure the corporate data

This scenario enables users to securely access corporate data from their mobile device while giving them options; IT achieves the sweet spot of securing corporate resource access in a way that promotes positive end-user experiences.

 

Scenario Requirements

This scenario is simple to fulfill: all it requires is setting up a conditional access policy. That's it!

  • One Conditional Access policy
    • Policy: scoped to EXO/SPO, targets Mobile Apps and Desktop Clients for Modern Auth and Exchange ActiveSync, and requires devices be either Compliant or using the Approved Client App

With this single policy, we target both the modern auth and Exchange ActiveSync channels, ensuring that whichever option a user chooses, they will be protected (see "Additional options to secure Office 365" below for how to protect third-party apps). This gives end users the flexibility they are looking for, while ensuring that corporate data remains secure.

 

Configuration Steps

  • Create the Conditional Access policy to require mobile devices either be enrolled or using an Approved App to access corporate data

 

[Image: Conditional Access policy configuration]

 

Once enabled, this policy will do everything that you need in regards to setting this scenario up.

 

End User Experience

Let's take a look at how these policies impact the end user experience.

When a user tries to set up the native mail client on an iOS 11 device, they will see this message prompting them to enroll in Intune:

 

[Image: block message shown to the modern authentication client]

 

Since this is a modern auth client, we stop them before they can even finish setting up the mailbox on the device. The experience differs a bit using the legacy EAS authentication channel, which allows the mailbox to be set up, but quarantines the device in EXO and prompts the user to enroll. That message looks like this:

 

[Image: email quarantine message]

 

These are the two prompts your end users may see when you set this scenario up, so make sure you communicate what to expect if they try to use the native mail client on their device.

If users are using Outlook mobile, they will be prompted to set up the "broker" app on their device based on their device platform. On iOS devices, they will need to install the Microsoft Authenticator app; on Android, they will need to install the Company Portal (so if you are using Intune App Protection for Android today, end users should already have this installed). Part of this process of installing the broker app is also registering the device with Azure AD. The broker app becomes the manager for connections to Azure AD/Office 365 and is in charge of determining that the application trying to connect to cloud services is indeed an approved application. You can read more about this here: https://docs.microsoft.com/en-us/intune/app-based-conditional-access-intune

 

Additional options to secure Office 365

The option to block legacy authentication has also recently arrived in Conditional Access. Until this was available, Conditional Access only worked with modern authentication and EAS clients. We can now block all traffic coming in to Office 365/Azure AD with Conditional Access (including Exchange Web Services, SMTP, POP, IMAP, and so on). We strongly recommend you create a simple Conditional Access policy that targets "Other Clients" and blocks that traffic. This ensures that legacy mail clients using connection options other than modern auth or EAS will be blocked. You can read more about this new functionality here: https://cloudblogs.microsoft.com/enterprisemobility/2018/06/07/azure-ad-conditional-access-support-for-blocking-legacy-auth-is-in-public-preview/

 

In Review

Scenario Goal: Protect corporate data on mobile devices while giving users a choice in how they want to use their mobile device

Scenario Scope: iOS/Android

Recommended when…

  • Customers are concerned about protecting mobile access to Office 365
  • There is an end-user population who uses the native applications today
  • Customers want to provide options to end-users in how they access Office 365 data

In the next post of this series, we will shift our focus to how we can ensure users are accessing web content via the Managed Browser instead of the native device browsers. Have more questions about securing mobile device access to Office 365? Have you tried out these conditional access scenarios? Let us know in the comments below!

 

-Josh and Sarah

Cloud Platform Release Announcements for June 27, 2018


Azure Data Lake Storage Gen2 in preview

Azure Data Lake Storage Gen2 is a highly scalable, performant, and cost-effective data lake solution for big data analytics. Azure Data Lake Storage Gen2 combines the power of a high-performance file system with massive scale and economy to help you speed your time to insight. It extends Azure Blob Storage capabilities and is optimized for analytics workloads. Store data once and access via existing Blob Storage and HDFS compliant file system interfaces with no programming changes or data copying. Azure Data Lake Storage is compliant with regional data management requirements.

Azure Data Lake Storage Gen2 adds a Hadoop compatible file system endpoint to Azure Blob Storage and delivers the following capabilities:

  • Limitless storage capacity.
  • Support for atomic directory transactions during analytic job execution. This means that analytics jobs will run faster and require fewer individual transactions, thus leading to lower costs for Big Data Analytics workloads.
  • Fine grained, POSIX compliant ACL support to enable granular permission assignments for Data Lake directories and files.
  • Availability in all Azure regions when it becomes generally available.
  • Full integration with Azure Blob Storage.

Azure Data Lake Storage Gen2 will support all Blob tiers (hot, cool, and archive), as well as lifecycle policies, Storage Service Encryption, and Azure Active Directory integration. You can write data to Blob storage once using familiar tools and APIs and access it concurrently in Blob and Data Lake contexts.

To learn more about Azure Data Lake Storage, please visit our product page.

Azure IoT Edge | GA

Announcing the general availability of Azure IoT Edge, a fully managed service that delivers cloud intelligence locally by deploying and running artificial intelligence (AI), Azure services, and custom logic directly on cross-platform IoT devices. With general availability (GA), we are introducing several new features and capabilities, including:

  • Open source release of IoT Edge runtime.
  • Support for Moby container management system.
  • Zero touch provisioning of edge devices with Device Provisioning Service.
  • Security Manager with support for hardware-based root of trust for allowing secure boot strapping and operation of Edge.
  • Scaled deployment and configuration of Edge devices using Automatic Device Configuration Service.
  • Support for SDKs in multiple languages, including C, C#, Node, Python and Java (coming soon).
  • Tooling for module development including coding, testing, debugging, deployment—all from VSCode.
  • CI/CD pipeline using Visual Studio Team Services.

Azure services supported on IoT Edge include:

To learn more, read the announcement blog.

Azure App Service | Managed Service Identity—GA

Managed Service Identity gives Azure services an automatically managed identity in Azure Active Directory (Azure AD). You can use this identity to authenticate to any service that supports Azure AD authentication, including Key Vault, eliminating the need to manage credentials on your own.
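As a hedged illustration (using the managed identity REST endpoint that App Service exposes to the app through environment variables; the Key Vault resource URI is just one example), a token can be requested like this:

# Request an Azure AD access token for Key Vault from inside an App Service app.
$resource = "https://vault.azure.net"
$uri      = "$($env:MSI_ENDPOINT)?resource=$resource&api-version=2017-09-01"
$token    = Invoke-RestMethod -Method Get -Uri $uri -Headers @{ Secret = $env:MSI_SECRET }
$token.access_token   # use as a Bearer token when calling Key Vault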

Learn more.

Azure Logic Apps | Generally available in China

Azure Logic Apps is now generally available in China.

Logic Apps delivers process automation and integrates applications and data across on-premises, public, or private cloud environments.

Logic Apps enhance productivity with business processes automation, EAI, B2B/EDI, as well as services and applications integration using most common out-of-the-box connectors for Azure services, Office 365, Dynamics CRM, and other services.

Learn more about Logic Apps.

Azure Search | Auto complete and synonyms in preview

New query features in Azure Search

Azure Search has two new features now available in preview. The autocomplete API searches an existing index to suggest terms that complete a partial query. The synonyms functionality allows Azure Search to return not only results that match the query terms typed into the search box, but also results that match synonyms you have defined for those terms.

Learn more about Azure Search.

Azure SQL Database | Data Sync—GA

Azure SQL Data Sync is now generally available. Azure SQL Data Sync provides unidirectional and bidirectional data synchronization capabilities between Azure SQL Database and SQL Server endpoints deployed anywhere in the world. Manage your data sync topology and schema, and monitor sync progress, centrally from the Azure portal. Azure SQL Data Sync also provides a stable, efficient, and secure way to share data across multiple Azure SQL Database or SQL Server databases.

For more information, visit the Azure blog.

Azure SQL Database | Storage add-ons now available

Storage add-ons now generally available in Azure SQL Database

Now generally available, storage add-ons allow the purchase of extra storage without having to increase DTUs or eDTUs. Purchase extra storage for performance levels S3–S12 and P1–P6 databases up to 1 TB, for smaller eDTU premium elastic pools up to 1 TB, and for standard elastic pools up to 4 TB.

Learn more about these add-on storage options on the Azure blog.

Azure SQL Database | Zone Redundancy—GA

Zone redundant configuration for premium service tier of Azure SQL Database now generally available.

Announcing the general availability of zone redundant premium databases and elastic pools in select regions. The built-in support of Availability Zones further enhances business continuity of Azure SQL Database applications and makes them resilient to a much larger set of unplanned events, including catastrophic datacenter outages. The supported regions include Central US and France Central with more regions to be added over time.

Learn more.

Azure Event Hubs |Availability Zones support in preview

Availability Zones support for Event Hubs now in preview

With Azure Availability Zones support for Event Hubs, you can build mission-critical applications with higher availability and fault tolerance by using cloud messaging between applications and services.

Azure Availability Zones support for Event Hubs provides an industry-leading, financially-backed SLA with fault-isolated locations within an Azure region, providing redundant power, cooling, and networking. The preview begins with Central US and France Central, and is available to all Event Hubs customers at no additional cost.

Learn how to explore Azure Availability Zones support for Event Hubs.

Azure Database for MySQL and Azure Database for PostgreSQL (open source database services) | Gen 5 new regions—GA

Azure Database for MySQL and PostgreSQL: Extended regional availability and memory optimized pricing tier

Azure Database for MySQL and Azure Database for PostgreSQL availability has been extended to the following regions: Central US (Gen4), North Central US (Gen5), France Central (Gen5), East Asia (Gen5), India Central (Gen5), India West (Gen5), and Korea Central (Gen5). You can now create and switch to the new memory optimized pricing tier, which is designed for high-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency.

Azure SQL Database | Elastic Jobs in preview

Elastic Database Jobs preview now available for Azure SQL Databases
Now available in preview, Azure Elastic Database Jobs is a fully Azure-hosted service that's easy to use for executing T-SQL based jobs against a group of databases. Elastic jobs can now target databases in one or more Azure SQL database servers, Azure SQL elastic pools, or across multiple subscriptions. Elastic jobs can be composed of multiple steps and can dynamically enumerate the list of targeted databases as additional databases are added or removed from the service.

Learn more on the Azure blog.

Azure SQL Database | Resumable index creation in preview

Resumable online index create feature of Azure SQL Database in preview

The resumable online index create feature (in preview) lets you pause an index create operation and resume it later from the point where it was paused or failed. With this release, we extended the resumable functionality, previously available for online index rebuild, to online index create as well.
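As a rough sketch of the T-SQL involved (table, index, and connection details are made up for illustration), the operation can be created as resumable and then paused or resumed:

# Create a resumable online index on an Azure SQL database, then pause/resume it as needed.
$query = "CREATE INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId) " +
         "WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);"
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "mydb" `
    -Username "sqladmin" -Password "<password>" -Query $query
# To pause or resume the operation later:
#   ALTER INDEX IX_Orders_CustomerId ON dbo.Orders PAUSE;
#   ALTER INDEX IX_Orders_CustomerId ON dbo.Orders RESUME;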

Learn more.

Azure Dev Spaces | Preview

Imagine you are a new employee trying to fix a bug in a complex microservices application consisting of dozens of components, each with their own configuration and backing services. To get started, you must configure your local development environment so that it can mimic production, then set up your IDE, build tool chain, containerized service dependencies, a local Kubernetes environment, mocks for backing services, and more. With all the time involved setting up your development environment, fixing that first bug could take days. With Azure Dev Spaces, a feature of Azure Kubernetes Service (AKS) now in preview, the process can be drastically simplified.

Using Azure Dev Spaces, all a developer needs is their IDE and the Azure CLI. Azure Dev Spaces provides a rapid, iterative Kubernetes development experience for teams. With minimal machine setup, developers can iteratively run and debug containers directly in AKS, even in complex environments. Teams can share an AKS cluster to collaboratively work together, with each developer able to test end-to-end with other components without replicating or mocking up dependencies. They can also use Dev Spaces to develop on the OS of their choice—Windows, Mac, or Linux—using familiar tools like Visual Studio, Visual Studio Code, or just the command line.

Learn more.

Improved user experience for navigation in Visual Studio Team Services

Announcing the preview of an improved navigation user experience (UX) for Visual Studio Team Services. The goal of this new experience is to give users clean, modern, task-focused navigation while enabling more functionality. It also allows customers to decide how much complexity they would like to expose to their users by enabling or disabling parts of Visual Studio Team Services, such as version control or build. The preview also includes improvements to notifications and the homepage. Additional improvements will be coming soon.

For all the details on what’s new with this release and to learn how to turn it on for testing, see our detailed blog post.

Azure Active Directory (Azure AD) | Password protection in preview

One weak password is all a hacker needs to get access to a corporation’s resources. With Azure AD password protection, you can now secure against this vulnerability. This security feature within Azure AD has capabilities such as banned passwords and smart lockout, and delivers on a hybrid promise by extending the protection to identities in the cloud and on-premises.

Banned passwords enables you both to restrict users from setting common passwords such as “password123” and to define a custom set of banned passwords such as “companyname123”.

Additionally, you can set policies to define the password complexity you want to enforce from a security or compliance standpoint. Also part of password protection, smart lockout enables you to set policies on the number of times a user can fail authentication before being locked out.

With Azure AD password protection, you can bring together the power of cloud-powered protection and flexible policy definition, as well as protect against password spray attacks on your corporate resources.

To learn more, view the full blog post.

Get started today by trying out this preview for yourself.

Azure AD conditional access VPN connectivity | GA

Announcing the general availability of the support of Azure AD conditional access for Windows 10 VPN clients. With this feature, the VPN client is now able to integrate with the cloud-based Conditional Access Platform to provide a device compliance option for remote clients. This allows conditional access to be used to restrict access to your VPN in addition to your other policies that control access based on conditions of user, location, device, apps and data.

Get started today and learn more by visiting our documentation website.

Azure AD conditional access | What If GA

Announcing the general availability of the Azure AD conditional access What If tool. As you continue to create multiple policies within conditional access, the What If policy tool allows you to understand the impact of your conditional access policies on your environment and users. Instead of test driving your policies by performing multiple sign-ins manually, this tool enables you to evaluate a simulated sign-in of a user. The simulation estimates the impact this sign-in has on your policies and generates a simulation report. The report lists not only the applied conditional access policies but also classic policies, if they exist. This tool is also handy for troubleshooting whether a particular user will be affected by a policy.

Get started today with this tool and visit our documentation site to learn more.

Microsoft Developer Kits for Windows Mixed Reality and Project Kinect for Azure


Did you know that you can apply for the Windows Mixed Reality and Project Kinect for Azure development kits?

If you have a project you would like to build using Project Kinect for Azure or Windows Mixed Reality, you can apply to be selected to receive the development kit for the respective program.

Follow the links to apply to the programs:

For Mixed Reality: 

https://iwantmr

For Project Kinect for Azure:

https://aka.ms/iwantkinect

Please don't hesitate to contact us if you need anything.

Adopt faster using Play Sessions


I have always struggled with learning new topics from online videos. Videos are definitely helpful, and I know a lot of people who learn how to cook, earn IT certifications, or even fix a car just by watching videos, but not me.  I prefer classroom training because I get to learn from someone in person and I get the chance to practice, ask questions, and make mistakes.  The more mistakes I make, the more expertise I gain about the topic, because I know what can go wrong and how to troubleshoot when problems arise.  This is why I have always believed that to get better adoption rates, employees should become experts in O365 workloads, and that we should train our people to become trainers and let them train stakeholders or champion groups through practice sessions.

 

Some time ago, I started an experiment with my colleague to create an adoption whiteboard session for the FastTrack Center in Las Colinas.  We had the idea of having FastTrack Managers and FastTrack Engineers meet in person in a conference room to learn new adoption topics, so we could better explain adoption trends to our customers.  I then mentioned the idea of what we now call Play Sessions.  The name might sound silly, but let me explain.

 

One of the first meetings that included a “Play Session” was for MS Teams. We decided to invite a Teams SME to do a demo and an FM to present a customer-facing deck for the first half of the session.  People loved it, and although they didn’t ask any questions, they seemed to understand. For the second half of the session we created a game, a Play Session, and that's when things got interesting.

I created a list of tasks to complete in 30 minutes, we created a test team in MS Teams and a scoreboard, and then grouped people in pairs.  They had to complete all the tasks from the Play Session, and every time they finished one they ran to the scoreboard and earned a point. The pair that finished first would win our first Play Session.

Then… something amazing happened!

Everybody started asking questions, talking amongst each other, and running to the scoreboard.  We discovered that even the FastTrack Engineers we thought understood the first half of the meeting didn’t understand how to do some things that the SME talked about in his demo. At the end of the session, everyone was excited to learn more about other workloads in future sessions.   In the survey, participants mentioned that they felt more prepared to talk about MS Teams to customers and do this same session in their demos.  People that were not using Teams started to use it often.   They all practiced, made mistakes, learned, and had fun at the same time.

We have many great resources to share for O365 learning, but some FMs are going the extra mile and creating play sessions with their customers because they want them to adopt more workloads and to adopt them faster.

Here is what we did in the play session:

Instructions:

  1. Navigate to MS Teams.
  2. You are already invited as an owner to our Play Session Team called “ Teams- Play Session.”

Follow these instructions.  Each time your team finishes a task, go to the whiteboard and check the task that you completed.

 

Task | Description
1 | Navigate to the Team “Teams- Play Session” and show how you feel right now with a GIF.
2 | Create a Channel under that Team and give it an original name.
3 | Install these applications in the Channel that you just created: Polly, Planner, OneNote, and Power BI.
4 | Click the Files tab in the Team and edit the document called “TESTING 1 2 3.”  Share with us what you thought about the SME session.
5 | Navigate to the team and @mention someone.
6 | Navigate to the Store, install the “Growbot” app, and send a Kudo to your team member in your Channel.
7 | Send a Kudo with Growbot in the Team.
8 | Create a poll in the Team using “Polly,” and make sure to answer other teams’ polls.
9 | Navigate to Outlook and send an e-mail to the Team’s e-mail address.
10 | Create a Teams meeting from Outlook for today at 9 P.M. and send it to your other Team member.
11 | Find a GIF of your favorite movie and post it in your Team.

 

Why don’t you try doing an exercise like this for your group or team?  The adoption process will quicken, they will learn about the workload, and will be entertained at the same time.

Collaborators of the play session: Camille Jimenez (Relationship Manager), Priya Vanka (Relationship Manager), Alicia Sanchez (FastTrack Manager), Alejandro Lopez (FastTrack Engineer – Teams SME)

 

Configuration Manager – Setting up Cloud Services using Wildcard Certificates


Hello all,

I wanted to take a second to introduce a new contributor to the blog, Matt Toto.  Matt and I have known each other for about 5 years and have even teamed up on some customer engagements recently.  I asked Matt if he'd like to bring his expertise to this blog and he graciously agreed.  With that, take it away Matt....

===================================================================================================================

Hi Everyone!  This is Matt Toto, I'm a ConfigMgr PFE focused on Cloud Services.  In this article I'll be sharing how to use a wildcard certificate for setting up both the Cloud Management Gateway and Cloud Distribution Point.  Support for this capability was added to Configuration Manager in 1802.

Using a wildcard certificate to create cloud services in Configuration Manager (CMG and CDP) has a lot of benefits.  It reduces the cost and maintenance of PKI.  A single wildcard cert can be used as a management certificate (if using the classic deployment model) as well as to create a potentially unlimited number of CMGs and CDPs.  The process is quite simple.  Let's get started!

 

Step one is to obtain a wildcard cert for your domain, for example *.contoso.com, from either your internal PKI or a public provider.  In my lab I use a public cert provided by DigiCert for an ARM CMG; this example is based on that configuration.

Next you'll need to create a Cloud Management Gateway in Configuration Manager.  On the General tab, sign in as an administrator account to provide Configuration Manager with access to your subscription info.

 

 

On the Settings page of the wizard, specify your wildcard certificate and enter its password.  You will receive the following prompt, informing you that the Common Name (CN) of the certificate contains a wildcard.  It's OK, we'll fix that in a bit.  For now, just click OK, ok?

 

 

Initially your screen will look something like this.  Note that it's telling you the name cannot contain special characters, which, for the moment, it does.

 

 

This is where you come in!  Notice the Service FQDN box?  Yes, it does look unhappy with that red SPLAT.  But it also looks like you can type in that box, right?  Normally you cannot enter text here because it is auto-populated based on the Common Name field in the certificate.  In this case, because it's a wildcard, you actually NEED to type a unique name here.

 

 

 

Go ahead, type something unique.  All you have to do is come up with a unique name, enter it in the box, then click out of the box.  Once you do that the Service Name box will display the name provided and you can continue with the setup.  Like so…

 

 

After finishing the wizard, the service will be provisioned with that as its Service Name, and the Cloud Service Name will have .cloudapp.net appended.

 

 

Now that the service is provisioned, you'll need to update DNS.  Add a CNAME that maps the Service Name (which is the name that your SCCM client will try to resolve) to the Cloud Service Name, which in this example is UniqueName.cloudapp.net.
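For example, on a Windows DNS server the record could be added with the DnsServer PowerShell module (the zone and names here are illustrative; substitute your own):

# Map the service FQDN that clients resolve (UniqueName.contoso.com)
# to the Azure cloud service name (UniqueName.cloudapp.net).
Add-DnsServerResourceRecordCName -ZoneName "contoso.com" -Name "UniqueName" -HostNameAlias "UniqueName.cloudapp.net"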

 

 

 

That's it!  Support for the wildcard certificate is a game changer for setting up the Cloud Management Gateway and Cloud Distribution Point in Azure!
