-
Data Mining For Geotechnical And Mining Engineers
It is now quite customary to arrange any set of data in a computer spreadsheet, where the rows represent different cases or tests and the columns represent values of the parameters measured or calculated for each case. However, in some cases we find it very hard, or even impossible, to find patterns or models in a set of data. Usually the degree of difficulty increases with both the number of data parameters (number of columns) and the number of data points (number of rows). One is also quite likely to deal with data that are biased, deficient or inaccurate in some respect. This is usually a most disturbing difficulty, as any model incorporating such deficiencies or inaccuracies contains noise. Such models cannot be useful until they pass through an appropriate filtering process. Most often, however, even the recognition of the bad data is itself unknown and needs to be discovered, investigated and justified.
Successful models are the result of good, reliable and accurate data. For example, suppose the well-known electrical relations V = IR and P = RI² had not yet been discovered, but, after some experiments, we had measured the currents I = 10, 20, 30 A, the resistances R = 5, 10, 15 Ω, the voltages V = 50, 200, 450 V, and the powers P = 0.5, 4 and 13.5 kW. From this "good" data one can easily derive the relationship between the power (P), the resistance (R) and the current (I), leading to the well-established equation P = RI², even though the three parameters (I, R, V) are not independent and are potential sources of noise if the data were inaccurate, biased or disturbed. It is left to the reader to disturb this data in any fashion they wish, to experience the extreme difficulty of rediscovering the above relationships, even though they are no longer unknown to us.
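The electrical example above can be checked directly. The short sketch below verifies the quoted data against V = IR and P = RI² (all values converted to SI units), and notes the hidden dependence among the parameters that the article warns about:

```python
# Check the article's "good" electrical data against V = IR and P = R*I^2
# (SI units: amperes, ohms, volts, watts).
I = [10, 20, 30]           # currents, A
R = [5, 10, 15]            # resistances, ohm
V = [50, 200, 450]         # voltages, V
P = [500, 4000, 13500]     # powers, W (i.e. 0.5, 4 and 13.5 kW)

for i, r, v, p in zip(I, R, V, P):
    assert v == i * r       # Ohm's law: V = IR
    assert p == r * i ** 2  # power law:  P = RI^2
    assert p == v * i       # equivalent form: P = VI

# Note that the parameters are not independent in this data set
# (here R = I/2 and V = I*R), so a blind multi-parameter regression
# of P on I and R is ill-posed -- exactly the article's point about
# hidden dependence as a potential source of noise.
```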
The problem is that most often we do not know how good or bad, sufficient or deficient, our data are. These are perhaps sufficient reasons to remind us that any experimental data need to be validated professionally before they can be used in data mining for a predictive modelling process.
Data mining is the process of extracting a category, pattern, or model from existing data in order to predict either other existing data or data not yet observed. Linear regression, i.e. fitting a simple line to a set of data, is the simplest data mining method. In this case there are no interactions between the independent parameters. Any curvature in the fitted line is a sign of either non-linearity or interactions among the parameters. Traditional statistical regression models are not appropriate for discrete, descriptive, or item-based data. For example, consider data points (e.g. the percentage of the time one shopping item (dairy) is sold together with another item (meat) in a supermarket) distributed as two distinct regions or clusters in an x-y coordinate system. A classical mathematical regression function cannot represent such discontinuous sets or clustering behaviour. For discontinuous or item data, cluster analysis and decision trees are the two common techniques used to form subsets, groups or categories with common behaviour or properties.
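The two-region supermarket example above can be illustrated with a minimal clustering sketch. The k-means routine and the co-purchase percentages below are hypothetical illustrations (they are not from the article), and the deterministic initialisation is a simplification that works here only because the two regions are well separated:

```python
def kmeans(points, k=2, iters=20):
    """Minimal k-means clustering: repeatedly assign each point to its
    nearest centroid, then move each centroid to its group's mean."""
    # Deterministic initialisation for this sketch: first and last
    # points as seeds (adequate because the regions are well separated).
    centroids = [points[0], points[-1]]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            groups[j].append(p)
        centroids = [(sum(p[0] for p in g) / len(g),
                      sum(p[1] for p in g) / len(g)) if g else centroids[j]
                     for j, g in enumerate(groups)]
    return centroids, groups

# Hypothetical co-purchase percentages (dairy %, meat %) forming two
# distinct regions, as in the supermarket example above.
low = [(5.0, 6.0), (6.0, 7.0), (7.0, 5.0), (5.5, 6.5), (6.5, 5.5)]
high = [(40.0, 45.0), (41.0, 46.0), (42.0, 44.0), (40.5, 45.5), (41.5, 44.5)]
centroids, groups = kmeans(low + high, k=2)
# Each group recovers one of the two distinct regions that a single
# regression function could not represent.
```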
There are two categories of data mining technique, namely inferential and non-inferential. Hypothesis testing and inference from sample to population are the main features of inferential data mining, which has its foundations in traditional statistical theory. Discriminant analysis or group regression, linear regression, analysis of variance, logistic and multinomial (categorical) regression and time series analysis all belong to this category. The key difference between inferential and non-inferential techniques is whether hypotheses need to be specified beforehand. The non-inferential methods do not normally require this, as each is semi- or fully automated in its search for a model. Cluster analysis, market basket/association analysis, link analysis, memory-based reasoning, decision trees and neural networks all belong to the non-inferential techniques. There is no predefined outcome in cluster analysis, decision trees or market basket analysis, as all three use categorical or continuous predictors to create cluster, tree or basket (association) memberships for the data points. Link analysis creates linkages between sets of items, while memory-based reasoning accepts all types of data (including text) to predict an outcome. Amongst all these, the neural network is one of the most popular and powerful techniques, able to use both categorical and continuous predictors to predict categorical and continuous outcome variables. The reader may refer to Berry et al (1997) for more information on all these models.
Inferential techniques and neural networks are perhaps the most relevant to almost all engineering disciplines. Skipping the inferential methods (Alehossein and Hood, 1996), we elaborate on neural networks.
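The core computation in a neural network, before any elaboration on training, is the feedforward pass. The sketch below shows a single-hidden-layer pass with sigmoid activations; the weights are illustrative placeholders (untrained), not values from the article:

```python
import math

def sigmoid(z):
    """Logistic activation, mapping any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W1, b1, W2, b2):
    """Single-hidden-layer feedforward pass: each hidden unit takes a
    weighted sum of the inputs through a sigmoid, and the output unit
    does the same over the hidden activations."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)) + b2)

# Illustrative (untrained) weights: 2 inputs -> 2 hidden units -> 1 output.
W1 = [[1.0, -1.0], [-1.0, 1.0]]
b1 = [0.0, 0.0]
W2 = [1.0, 1.0]
b2 = -1.0
y = forward([0.5, 0.2], W1, b1, W2, b2)   # a continuous value in (0, 1)
```

Training would then adjust the weights by backpropagation so that the output matches the categorical or continuous target variable; only the forward computation is sketched here.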
-
Mechanical behaviour of hydrated cement treated crushed rock base (HCTCRB) under repeated cyclic loads
This paper reports the mechanical behaviour of hydrated cement treated crushed rock base (HCTCRB), a granular road base material, subjected to repeated cyclic loads in Repeated Load Triaxial (RLT) tests with various stress paths, in order to improve the understanding of this Western Australian road base material for mechanistic-empirical pavement design and analysis. Pavement surface rutting and longitudinal and alligator cracking are normally the main forms of damage in flexible pavements. A factor contributing to such damage is excessive irreversible and reversible deformation of the base layer, yet the mechanical response of unbound granular materials (UGMs) under traffic loading is not well understood. In this study, the shakedown concept was used to describe and determine the limits of use of HCTCRB under different stress conditions. The concept is a theoretical approach for describing the behaviour of UGMs under RLT testing, based on macro-mechanical observations of the UGM response and the distribution of vertical plastic strain in the tested material. Once the shakedown limit of a UGM is known, the accumulated plastic strain in an unbound granular layer, which causes rutting in pavements, may become predictable. In this paper, compacted HCTCRB samples were subjected to various stress conditions defined by the stress ratio (the ratio of the major vertical stress, σ1, to the minor horizontal stress, σ3) in order to simulate real pavement conditions. The study found that HCTCRB has a working stress ratio of 11 in a pavement structure and achieves a stable state after a large number of load cycles. The mechanical responses were investigated and the limiting ranges for the use of HCTCRB in pavements were determined.
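In shakedown analysis of RLT results, the response is commonly classified into ranges (stable shakedown, plastic creep, incremental collapse) from the permanent strain accumulated over a window of load cycles. The sketch below is a Werkmeister-style classifier; the cycle window and the threshold values are illustrative assumptions, not criteria or data from this paper:

```python
def shakedown_range(eps_3000, eps_5000, a=4.5e-5, b=4.0e-4):
    """Classify an RLT response into shakedown ranges A/B/C from the
    permanent (plastic) strain accumulated between load cycles 3000
    and 5000.  The thresholds a and b are illustrative assumptions
    for this sketch, not values taken from the paper."""
    accumulated = eps_5000 - eps_3000   # permanent strain over the window
    if accumulated < a:
        return "A"   # plastic shakedown: response becomes stable
    elif accumulated < b:
        return "B"   # intermediate response: plastic creep
    return "C"       # incremental collapse: strain keeps accumulating

# A response that accumulates almost no further permanent strain at
# high cycle counts would classify as range A (stable), as reported
# for HCTCRB at the working stress ratio.
```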
-
Assessing the Geometry of Defect Waviness from Borehole Data
Within the large, open cut, iron ore mines of the Pilbara region of Western Australia, defect shear strengths often control the slope design where bedding dips shallowly to moderately out of the pit slope. The presence of metre to decametre scale open folding or waviness in these units can contribute to the friction angle of bedding shear strengths, potentially allowing for steeper slope angles and improved economics for the deposit.
Traditionally, waviness affecting defect shear strength is assessed from surface mapping, bench mapping or qualitatively from observations in core. Surface mapping of bedrock is often not possible due to detrital cover or a lack of suitable outcrop, while bench mapping is inherently conducted perpendicular to the direction of sliding risk. The use of downhole data from boreholes drilled into the slopes circumvents these issues.
The method presented here involves assessing characteristic downhole wavelength, inter-limb angle and amplitude of folding from defect orientation data interpreted from borehole televiewer imaging. The downhole wavelength and defect orientations are transformed to a true down-dip wavelength, dilation angle, and estimated amplitude in the direction of sliding risk. The calculation of down-dip wavelength is critical for assessing the applicability of the associated dilation angle to the scale of the slope and failure mechanism in question. The adoption of defect shear strengths that include a waviness contribution to the friction angle allows for implementation of steeper slope angles in structurally controlled slopes.
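For idealised sinusoidal waviness, the dilation angle mentioned above corresponds to the maximum bedding slope of the wave, which depends only on amplitude and wavelength. The sketch below computes this simple geometric quantity; it is an assumption-laden illustration and does not reproduce the paper's transformation from downhole to down-dip geometry:

```python
import math

def dilation_angle_deg(amplitude, wavelength):
    """Maximum slope of an idealised sinusoidal wave
    y = A * sin(2*pi*x / L), i.e. i = atan(2*pi*A / L).
    A geometric sketch only -- the paper's method of deriving true
    down-dip wavelength and amplitude from televiewer data is not
    reproduced here."""
    return math.degrees(math.atan(2.0 * math.pi * amplitude / wavelength))

# e.g. 0.5 m amplitude open folding at a 20 m down-dip wavelength
i = dilation_angle_deg(0.5, 20.0)   # roughly 9 degrees
```

The down-dip wavelength matters because, as the abstract notes, a dilation angle is only applicable when the wavelength is appropriate to the scale of the slope and failure mechanism being assessed.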
-
AGS QLD AGM & Annual Dinner
Dr Chris Browitt & Ms Alice Walker, British Geological Survey
-
Rockfall risk and remediation on the Lake George Escarpment
In 2005 a rockfall occurred on the Lake George Escarpment, north of Canberra in New South Wales, strewing rocks across the Federal Highway. Although no vehicles were hit by the fall, a significant traffic accident ensued as vehicles attempted to avoid the debris on the highway. The escarpment had not previously been recognised as a site prone to rockfall, there being no recorded or published history of rockfalls. A subsequent investigation used photogrammetry, geomorphological and geotechnical mapping, and computer rockfall simulations to assess both the extent of the rockfall hazard across the escarpment and the risk to road users. The assessed risks at the site exceeded the tolerable risk level stipulated by the NSW Roads and Traffic Authority (RTA). Slope remediation works involved the partial removal of the rockfall hazards by blasting and manual removal, in conjunction with the construction of a series of rockfall fences, to reduce rockfall risk to tolerable levels. This paper provides a commentary on rockfall trials filmed with high-speed motion cameras that were conducted to calibrate the rockfall simulation software and assess the fence requirements. The paper also provides insight into natural hazards with the potential to affect transport corridors, which should be considered at the route selection stage.
-
Protecting the environment with geosynthetics
Terzaghi Lecture 2018
-
Jacked end-bearing piles in the soft alluvial sediments of Perth
Precast concrete piles jacked through soft alluvial sediments and onto bedrock or dense gravels have been employed as foundations for two recent large projects located on the Perth foreshore. A static test performed on a 36.7 m deep instrumented pile at one of these sites is described in detail in this paper and is used subsequently to assess characteristics of pile shaft friction in the soft sediments as well as those of end-bearing in the bedrock. The instrumented pile test results are also compared with established design correlations for other similar ground conditions.
-
Effective stress versus total stress analysis of undrained problems in geotechnical engineering
Stability and strength analysis in geotechnical engineering can be carried out in terms of either effective or total stresses. Given fundamental knowledge of soil mechanics and a clear understanding of numerical modelling, the numerical simulations should produce consistent outcomes from both approaches. Undrained excavations were modelled in Abaqus, a finite element analysis software application. The Extended Modified Cam Clay model was used to characterise the soil behaviour in the Effective Stress Analysis (ESA) and the Tresca model was used in the Total Stress Analysis (TSA).
For the comparison between ESA and TSA to be valid, it is critical for both analyses to represent identical soil conditions and characteristics. Therefore, the fundamental part of the procedure was to derive the values of total stress parameters from the effective stress parameters and numerical outputs from ESA. In order to confirm the precise match of soil conditions between ESA and TSA, initial stress distributions and initial values of K0 were compared.
The shape of the yield surface in ESA was modified to minimise the difference between the ESA and TSA yield surfaces. The values of su were also adjusted to reflect the shear strength in the plane strain problem. While these modifications improved the results, most of the numerical outputs still showed inconsistencies between ESA and TSA. Comparing the maximum forces and moments in the structural elements, neither method produced results that were consistently greater than the other across all excavation scenarios. The differences in the structural forces and moments were shown to be due mainly to differences in the passive stress on the retaining wall between ESA and TSA. Observation of the stress paths of passive soil elements revealed that the passive soil did not reach the critical failure state in any of the ESAs, whereas in the TSAs the soil reached failure for the cantilever problems but not for the propped excavations.