The steadily increasing use and standardisation of Building Information Modelling (BIM) provide opportunities for the development of optimised workflows for the design and operation of very energy-efficient buildings. This PhD thesis proposes a new and flexible approach for the direct coupling of openBIM (IFC4) and Building Energy Performance Simulation (using Modelica). The proposed method uses the Information Delivery Manual (IDM, ISO 29481) framework to formally define the different exchange requirements encountered during the various building stages, especially during design. These exchange requirements are then technically specified through the implementation of custom Model View Definitions (MVDs), derived from existing and established ones (e.g. buildingSMART's Design Transfer View MVD). Lastly, building information modelling guidelines are proposed to ensure compatibility between the exported IFC4 model and the previously defined exchange requirements, together with a strategy to verify and guarantee the agreement between the exported IFC4 model and the custom MVDs.
The expandability and flexibility of the entire openBIM framework (IDM-MVD-IFC4), combined with the possibility to create custom Modelica libraries, are used in this study to create a flexible tool-chain that can easily be adapted to generate different simulation models of variable complexity, each model being compatible with a specific stage of the building lifecycle. Furthermore, the developed BIM-based workflow and tool-chain are extended to allow the efficient integration and partial automation of Fault Detection and Diagnosis (FDD) strategies in building systems. In addition, a strategy is developed that extends the proposed method to partially automate the creation and selection of reduced-order models, often used in Model Predictive Control (MPC), with different levels of detail.
The smooth and broad integration of such energy-saving strategies (e.g. FDD and MPC) in the built environment requires an interoperable, adaptable and flexible Building Automation and Control System (BACS) at a relatively low cost. This study evaluates the potential of open technologies (i.e. the combination of open communication protocol standards, open software tools and open hardware devices) to fulfil these conditions. Applied to an existing test facility, the results emphasise that the direct coupling between the openBIM framework and Modelica yields a flexible workflow that can significantly improve the application of building energy simulation during the different design stages, as well as the integration of energy-saving strategies such as FDD and MPC in the operational phase.
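The idea of checking an exported model against formally defined exchange requirements can be illustrated with a minimal sketch. This is not the thesis tooling (which works on IFC4 files and mvdXML-style MVD definitions); plain Python dicts stand in for parsed IFC entities, and the entity types and attribute names are only examples.

```python
# Illustrative sketch of an exchange-requirement check, assuming a custom
# MVD reduces to "per entity type, these attributes must be present".
# Entity dicts stand in for parsed IFC4 objects; names are hypothetical.

EXCHANGE_REQUIREMENT = {
    "IfcWall": {"GlobalId", "Name", "ThermalTransmittance"},
    "IfcWindow": {"GlobalId", "Name", "OverallHeight", "OverallWidth"},
}

def check_model(entities):
    """Return a list of (entity_type, missing_attributes) violations."""
    violations = []
    for e in entities:
        required = EXCHANGE_REQUIREMENT.get(e["type"], set())
        missing = required - e.keys()
        if missing:
            violations.append((e["type"], sorted(missing)))
    return violations

model = [
    {"type": "IfcWall", "GlobalId": "2O2Fr$t4X", "Name": "W1",
     "ThermalTransmittance": 0.24},
    {"type": "IfcWindow", "GlobalId": "1hqIFTRjf", "Name": "Win1",
     "OverallHeight": 1.2},  # OverallWidth missing -> violation
]
print(check_model(model))  # [('IfcWindow', ['OverallWidth'])]
```

In practice such checks are run against the full IFC schema with dedicated MVD validation tools; the point here is only that an exchange requirement is a machine-checkable contract between the exporting BIM tool and the simulation tool-chain.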
Duplex stainless steels combine excellent strength and corrosion resistance, leading to low maintenance costs and a longer structural lifespan. Lean duplex grades, i.e. duplex grades with a lower Ni mass content, have gained popularity in the last two decades thanks to their significantly lower initial cost. These grades are presently used in structures exposed to harsh environments combined with cyclic loads (and therefore subjected to fatigue), such as bridges in coastal areas. Yet, the fatigue resistance of lean duplex grades has so far been the subject of very little research, mostly dealing with material characterisation. The present doctoral research studies the fatigue behaviour of lean duplex grades. It investigates the base materials as well as welded connections subjected to cyclic loading through experiments, and puts the results in perspective with existing research on high strength (carbon and stainless) steel grades, with the ultimate goal of adapting the current fatigue design rules for these grades.
Firstly, the technical feasibility of using the studied lean duplex grade in a welded highway girder bridge is discussed via comparative studies of three design options made of mild carbon steel S355, high strength carbon steel S460 and duplex stainless steel EN 1.4162. For the higher strength grades, fatigue appears to be the controlling design criterion, because the slightly thinner cross-sections lead to higher equivalent stress ranges. Three details are shown to be the most critical ones, with the maximum ratios of applied design load over resistance: transverse stiffeners, cope holes and full-penetration butt welds in the flanges. When the possibilities to upgrade the detail categories for the cope holes and butt welds are considered, however, the transverse stiffener becomes the governing one.
The economic justification of using the EN 1.4162 grade is carried out through a comparative life cycle cost assessment.
Secondly, the results are presented of an experimental campaign, carried out during this project, investigating the fatigue behaviour of welded non-load-carrying transverse attachments and unwelded base plates made of duplex EN 1.4162 grade. The test results are assessed according to the nominal stress method and the hot spot stress method. The present guidelines for the hot spot stress method state that it can safely be applied to stainless steels, though only to the austenitic grades; the applicability of the method to welded duplex stiffeners is therefore checked here. When the nominal stress method is considered, the fatigue strength curve (SN-curve) obtained with the codified fixed slope equal to 3.0 shows that the current Eurocode predictions are slightly conservative. A greater slope is deemed more appropriate for the experimental results, but in that case the Eurocode detail category becomes too conservative. The hot spot stress method, on the contrary, gives less conservative predictions of the fatigue strength for the duplex welded detail, with a lower scatter band among the evaluated data population of the SN-curve. A comparison between the measured stress concentration factors (via strain gauges and digital image correlation) and the computed ones (via the finite element method) also reveals very consistent results, enabling positive conclusions on the applicability of the hot spot stress method to the studied duplex specimens.
To further assess the applicability of the current Eurocode provisions to higher strength steels subjected to fatigue, a broader database of fatigue test results is then collated. It includes more than 300 fatigue test results for transverse stiffeners, more than 150 for butt welds and more than 50 for cope holes.
Each database is then re-evaluated at the nominal stress and hot spot stress ranges via finite element analyses, which also demonstrates that the hot spot stress method is applicable beyond the scope of lean duplex grades. On the one hand, the current Eurocode detail category remains too conservative for the transverse stiffeners and butt welds, while being unconservative for cope holes. Hence, proposals are made to adapt the current rules for these detail categories so as to make fatigue design more efficient for higher strength metals. On the other hand, the hot spot stress method is shown to yield more representative predictions in general, for the whole range of grades considered in the database.
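The nominal stress assessment above rests on SN-curves of the Eurocode form, where the detail category is the fatigue strength at two million cycles and the slope m sets how fast life grows as the stress range drops. A minimal sketch, assuming the EN 1993-1-9 convention (the detail-category value used below is a generic example, not a result of this research):

```python
def fatigue_life(stress_range, detail_category, m=3.0):
    """Predicted cycles to failure from an SN-curve of the Eurocode form:
    N = 2e6 * (detail_category / stress_range) ** m,
    where detail_category is the fatigue strength at 2 million cycles [MPa]."""
    return 2.0e6 * (detail_category / stress_range) ** m

# Example: a detail category of 80 MPa with the codified fixed slope m = 3
print(fatigue_life(100.0, 80.0))  # 1_024_000 cycles (above the category strength)
print(fatigue_life(80.0, 80.0))   # 2_000_000 cycles, by definition
```

A steeper effective slope (larger m), as deemed more appropriate for the experimental results, flattens the curve and changes which detail category fits the data, which is exactly the trade-off discussed above.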
In this PhD project, we aim to improve the computational efficiency of probabilistic hygrothermal assessment based on two main approaches. The first approach focuses on the core model itself and aims at reducing the computation time of a single deterministic simulation. In this project, the core simulation models are wall models, which simulate the hygrothermal behaviour of building materials and components in multi-layer walls. Several one-dimensional (1D), two- or three-dimensional (2D/3D) models can be found in the literature. However, the application of these models is usually very time consuming, due to the high number of degrees of freedom after spatial and temporal discretisation. Instead of these original models, Van Gelder et al. used statistical surrogate models (such as polynomial regression models, Kriging, etc.) to reduce the simulation time. However, since these statistical surrogate models can only deliver static results, surrogate models that can mimic the dynamic behaviour (such as the time evolution of temperatures) need to be developed. To lower the computational complexity while retaining the dynamic behaviour of the original model, model order reduction (MOR) methods are commonly used. Through model order reduction, a large original model is approximated by a reduced model, and the solution of the original system can be recovered from the solution of the reduced model.
The second approach restricts the number of required repetitions of the core deterministic model within the Monte Carlo framework, which is the tool applied for estimating the probability distribution of the output parameters. The current state of the art in the Monte Carlo method is based on a replicated optimised Latin hypercube sampling strategy. Optimised Latin hypercube sampling is a sampling strategy that divides each parameter range into n intervals and ensures that only a single sample is placed in each interval.
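The model-order-reduction idea described above (approximate a large model by a reduced one, then recover the full solution) can be sketched with proper orthogonal decomposition (POD), one common MOR technique. The project does not commit to a specific MOR method here, so this is purely illustrative, with synthetic data standing in for wall-model snapshots.

```python
import numpy as np

# POD sketch: snapshots of a full-order solution are compressed with an
# SVD, and states are represented by a few modal coefficients instead of
# all degrees of freedom. Synthetic low-rank data stands in for a
# hygrothermal wall-model simulation.
rng = np.random.default_rng(0)
n_dof, n_snap = 200, 50
modes_true = rng.standard_normal((n_dof, 3))          # hidden low-rank structure
coeffs = rng.standard_normal((3, n_snap))
snapshots = modes_true @ coeffs + 1e-8 * rng.standard_normal((n_dof, n_snap))

# POD basis: dominant left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 3                              # retained modes (reduced-model order)
basis = U[:, :r]

# Reduce a new state to r coefficients, then recover it
x = modes_true @ rng.standard_normal(3)
x_reduced = basis.T @ x            # 3 numbers instead of 200
x_recovered = basis @ x_reduced
err = np.linalg.norm(x - x_recovered) / np.linalg.norm(x)
print(err)                         # near machine precision for this data
```

In a real application the reduced coordinates would be evolved in time by a projected version of the governing equations, which is where the speed-up for dynamic (time-evolution) outputs comes from.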
Even though optimised Latin hypercube sampling has a good convergence rate (1/n), it is a variance-reduction method, which makes its convergence difficult to monitor. To make convergence monitoring possible, replicated Latin hypercube sampling has been presented by Janssen, which uses permuted repetitions of smaller designs to reach the set number of runs n instead of a single n-run optimised Latin hypercube design. As a consequence, it allows evaluating the variances of the Monte Carlo outcomes, which in turn permits halting the calculation when the desired accuracy levels are reached. The main drawback of this method, however, is that it does not converge as fast as normal optimised Latin hypercube designs. Another approach is to use low-discrepancy sampling designs to create the input variables of the Monte Carlo framework. Singhee [6] showed that low-discrepancy sampling designs can often be a better choice than both simple random sampling and Latin hypercube sampling due to their lower variance, faster convergence and better accuracy. This result motivates us to study the application of a sequential sampling method based on a low-discrepancy design to improve the efficiency of Monte Carlo analysis.
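The replicated design idea above can be sketched in a few lines: run m permuted small Latin hypercube designs instead of one large one, and use the spread of the m replicate means to monitor convergence. This is a minimal numpy sketch (the optimisation step of an optimised LHS, and the actual hygrothermal model, are omitted; a toy function stands in for the deterministic core model).

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """One n-run LHS in d dimensions on [0, 1)^d: each of the n equal
    intervals per dimension holds exactly one sample."""
    u = rng.random((n, d))
    samples = np.empty((n, d))
    for j in range(d):
        perm = rng.permutation(n)          # permuted interval assignment
        samples[:, j] = (perm + u[:, j]) / n
    return samples

rng = np.random.default_rng(42)
model = lambda x: np.sum(x**2, axis=1)     # stand-in deterministic model

m, n, d = 10, 50, 2                        # m replicates of n-run designs
replicate_means = np.array(
    [model(latin_hypercube(n, d, rng)).mean() for _ in range(m)])

estimate = replicate_means.mean()                      # Monte Carlo estimate
std_error = replicate_means.std(ddof=1) / np.sqrt(m)   # convergence monitor
print(estimate, std_error)   # estimate near 2/3 = E[x1^2 + x2^2] on [0,1)^2
```

The standard error computed from the replicates is exactly the quantity that allows halting the calculation once a desired accuracy is reached, at the cost of somewhat slower convergence than a single optimised n·m-run design.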
Restoration and maintenance of coastal beach and dune systems require knowledge of aeolian sediment transport processes to predict the system's response to wind forcing over short- to long-term timescales. This allows for appropriate management of storms in addition to seasonal and decadal variations. Accurate aeolian sediment transport equations are of utmost importance for modern geomorphology and coastal engineering practice. Although sand transport by wind is easily observable, reliable and accurate data sets of sand transport rates are scarce due to measuring difficulties. Field monitoring is essential to understand its impact on overall sediment budgets and long-term coastal dune behaviour. The main objective of this thesis is to unravel the nature of aeolian sand transport on the Belgian coast, enlarging the knowledge base with the aim of improving long-term aeolian sediment transport estimates. The thesis is based on the analysis of data collected during field experiments carried out between 2016 and 2018. This work is performed within the framework of the project CREST (Climate REsilient coaST), funded by the Strategic Basic Research (SBO) programme of Flanders Innovation & Entrepreneurship, Belgium. The project aims to further the knowledge of coastal processes on land and under water.
To gain initial insight into the relationship between aeolian sand transport rates and wind speed, simultaneous monitoring of meteorological conditions and aeolian sand transport rates, using Modified Wilson And Cook (MWAC) sand traps, was carried out on the subaerial beach of two study sites in Belgium: the natural beach-dune system of Koksijde and the managed beach-dyke system of Mariakerke. Six aeolian mathematical models, each predicting saturated transport rates, are used for objective testing. Some of these models are frequently used for long-term budget calculations.
Recently, new models have been proposed in the literature that require validation against quantitative field measurements. The key parameter in all these aeolian models is the shear velocity, u*. Shear velocities are calculated using vertical wind profile data from meteorological stations located on the beach. A modified Bagnold model was able to produce a strong one-to-one relation between observed and predicted transport rates. The other aeolian models produced poor results, underestimating and/or overestimating sediment transport rates.
While short-term aeolian sediment transport rates and wind speed are correlated by a modified Bagnold model, it is of particular interest to study their relationship with annual to decadal dune behaviour. To gain insight into dune behaviour and the processes governing dune growth, long-term changes in dune volume at the Belgian coast are analysed based on data measured by airborne surveys. The Belgian government has been monitoring the eastern part of the coastline since 1979, and the entire coastline since 1983, by annually or bi-annually surveying cross-shore bathymetric profiles and collecting airborne photogrammetric and, since 1999, airborne laser scanning (LiDAR) data. For most of the 65 km long coastal stretch, linear dune growth is found. It varies between 0 and 12.3 m³/m/year, with an average of 6.2 m³/m/year, and features large spatial variations in the longshore direction. The dune volume is defined as the volume of sand above the dune foot level, which along the Belgian coast is set at +6.89 m TAW (Belgian Ordnance Datum). In this thesis, the longshore spatial and temporal variations in dune volume changes are derived and correlated with potential sediment transport. Based on a wind data set covering the period 2000-2017, it is found that the main drift of potential aeolian sediment transport is from the west to south-west direction (onshore to oblique onshore).
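The two computational steps named above, estimating the shear velocity u* from a measured vertical wind profile and feeding it to a Bagnold-type transport law (which is cubic in u*), can be sketched as follows. The coefficients are generic textbook values, not the calibration of the thesis' modified Bagnold model.

```python
import numpy as np

KAPPA = 0.4          # von Karman constant
RHO_AIR = 1.25       # air density [kg/m3]
G = 9.81             # gravitational acceleration [m/s2]

def shear_velocity(u, z, z0=1e-4):
    """u* from the logarithmic wind profile u(z) = (u*/kappa) * ln(z/z0),
    fitted through wind speeds u [m/s] measured at heights z [m]."""
    u, z = np.asarray(u, float), np.asarray(z, float)
    slope = np.polyfit(np.log(z / z0), u, 1)[0]
    return KAPPA * slope

def bagnold_transport(u_star, d=0.0003, d_ref=0.00025, C=1.8):
    """Bagnold-type saturated transport rate [kg/m/s]:
    q = C * sqrt(d/d_ref) * (rho_air/g) * u*^3  (cubic in u*)."""
    return C * np.sqrt(d / d_ref) * (RHO_AIR / G) * u_star**3

# Synthetic log profile constructed with u* = 0.5 m/s
z = np.array([0.5, 1.0, 2.0, 4.0])
u = (0.5 / KAPPA) * np.log(z / 1e-4)
ustar = shear_velocity(u, z)
print(round(float(ustar), 3))          # 0.5
print(float(bagnold_transport(ustar))) # transport rate, cubic in u*
```

The cubic dependence is the key property: doubling the shear velocity multiplies the predicted transport rate by eight, which is why small errors in u* dominate long-term transport estimates.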
Based on the modified Bagnold model, onshore potential aeolian sediment transport reaches a maximum of 9 m³/m/year, while longshore potential aeolian sediment transport can reach up to 20 m³/m/year. An important correlation is found between observed and predicted dune development at decadal timescales when zones with dune management activities are excluded. Most of the predicted data are within a factor of 2 of the measured values. The variability in potential transport is well related to the variability in dune volume changes at the considered spatio-temporal scale, suggesting that natural dune growth is primarily caused by aeolian sediment transport from the beach. It also suggests that annual differences in forcing and transport-limiting conditions (wind and moisture) have only a modest effect on the overall variability of dune volume trends.
At the Belgian coast, the beach profile is also regularly altered by human intervention to limit aeolian sand transport towards the hinterland, as it often results in large depositions of sand on neighbouring roads and tram tracks. Each year, the municipalities invest heavily in the maintenance of their streets and sewer systems. A field experiment was designed to carry out simultaneous measurements of wind and sediment transport across a human-constructed high berm with a steep seaward cliff backed by a dyke. In front of the dyke, a trench is excavated to prevent aeolian sand from being blown to the hinterland. Two sets of measurements were carried out, one with oblique onshore winds and one with directly onshore winds. Over-steepened velocity profiles, and thus large shear velocities, were measured at the steep cliff during the onshore wind event compared to the back beach, due to flow compression and acceleration. The fetch effect was measured across the flat berm, where maximum transport was achieved at a distance of 20 to 35 m from the berm lip. During the oblique onshore wind event, the fetch effect was characterised by an overshoot.
The sand flux rapidly increased towards a maximum value, followed by a decrease to a lower equilibrium value of approximately half the maximum mass flux obtained at the critical fetch distance. The downwind evolution of the vertical mass flux profiles caused the grain distribution above the surface (decay rate) to increase almost linearly with increasing fetch length away from the berm lip, until an equilibrium was achieved. This means that the distribution of particle trajectories changed in a similar way until it stabilised, for different transport events on a flat dry beach surface. Based on this study, the steep cliff in front of the human-constructed coastal berm is very sensitive to erosion by aeolian sand transport: sand eroded from the berm lip is deposited in front of the dyke and in the trench.
When studying aeolian sediment transport in coastal zones, a location is often chosen where the number of supply-limiting factors (e.g. moisture, shells, vegetation) is minimal, to ensure a better comparison between predicted and observed values. However, as is often the case in a natural coastal environment, the beach contains bed irregularities caused by wind action, patches of pebbles, beach wrack, shells and shell fragments, vegetation and beach litter. The effect of these small-scale bed features is frequently disregarded when conducting field experiments, and sometimes even dismissed as insignificant. Therefore, the effect of a largely scattered shell pavement on aeolian sand transport on the upper beach of a natural beach-dune system was studied during a short-term field experiment in the winter of 2016 in Belgium. The coverage of shell pavement on the upper beach increased towards the dunes and was highest just in front of the dune foot. Continuous sand transport occurred during strong, highly oblique onshore winds and was measured during two experiments.
During the two experiments, spatial variations in aeolian sand transport indicated a consistent decrease in transport rate with distance downwind. Within 162 m, aeolian sand transport decreased by a factor of 10 from the high waterline in the direction of the dunes. The negative gradient in transport caused local deposition of sand on the upper beach in the form of mobile rippled sand strips. This accumulation of sand acted as a new source area for aeolian transport to the dunes when the intertidal beach was inundated. However, as this region is also very sensitive to wave run-up, the accumulated sand may be removed again from the upper beach. The vertical distribution and median grain size of airborne sand particles across the shell-fragmented beach remained constant.
The main conclusions of this research are that on short timescales (hours to days), the aeolian sediment transport rate is cubically related to wind speed through a modified Bagnold model. On decadal timescales, an important correlation between observed and predicted dune development is also found. This indicates that dune growth is primarily caused by aeolian sediment transport from the beach and that annual differences in forcing and transport-limiting conditions have only a slight effect on the overall variability of dune volume trends. It also suggests that the modified Bagnold model performs well on longer timescales. During moderate onshore and oblique onshore winds, measurements on a high flat berm with a steep seaward cliff indicate the presence of the fetch and overshoot effects. The downwind evolution of the vertical mass flux profiles was observed to cause the exponential decay rate to increase almost linearly with increasing fetch length until an equilibrium decay rate was achieved. The effect of shell pavement and moisture on a beach is significant and cannot be disregarded.
Aeolian sand transport can be reduced by a factor of 10 within a short distance downwind, causing local accumulation of sand that does not enter the dunes directly. Shells, however, do not influence the vertical distribution and grain size of airborne sand particles downwind. Further research should focus on better quantifying aeolian sediment transport rates with more innovative monitoring techniques, especially when long-term monitoring is required, and on the influence of bed roughness and its feedback on the wind shear velocity. Additionally, changes in decadal dune behaviour due to climate change are also very relevant to study.
Stainless steel combines high mechanical properties with excellent corrosion resistance, making it an appealing choice for load-bearing elements in civil engineering and greatly reducing maintenance for structures in aggressive environments, such as coastal areas. The nonlinear stress-strain curve, the high strain hardening at relatively low strains and the differences in residual stresses of welded specimens compared to carbon steel necessitate a specific design treatment for structural stainless steel. This thesis focuses on improving the design rules for stainless steel beams subject to lateral torsional buckling. In addition, improvements to the shear buckling resistance and the strength of stainless steel fillet welds were investigated.
Firstly, an extensive experimental programme is presented on 13 beams failing in lateral torsional buckling, made of two lean duplex grades, EN 1.4062 and EN 1.4162, and one austenitic grade, EN 1.4404. Additionally, one carbon steel beam (S235) and one stainless steel beam (EN 1.4062) failing in shear buckling were tested, as well as 24 fillet welds, made of three stainless steel grades (EN 1.4062, EN 1.4404 and EN 1.4307) with two welding processes (GMAW and GTAW), under tension and shear. During these experiments, LVDTs, load cells, inclinometers and digital image correlation (DIC) were used to measure the displacement field and the applied loads. DIC was also used for the 3D measurement of the initial geometric imperfections of the beams and of the fracture area of the fillet welds.
Secondly, geometrically and materially nonlinear numerical models were validated against the lateral torsional buckling experiments, together with experiments collated from the literature. This was followed by a parametric study on the fundamental case of lateral torsional buckling: a beam on fork supports loaded by a constant moment.
This parametric study covered 30 cross-section geometries, beam slendernesses ranging between 0.3 and 1.95, and three stainless steel grades, one from each stainless steel family. The results showed that the current design rules slightly overestimate the lateral torsional buckling strength in the lower slenderness range for ferritic and austenitic stainless steel, and that undue conservatism is present for beams with higher slendernesses. Based on a reliability analysis of the numerical results, improvements to the stainless steel design rules were proposed, including the derivation of individual imperfection factors for each stainless steel family. These improvements were inspired by the recent design proposal by Taras and Greiner for carbon steel beams in lateral torsional buckling. The proposal improves the safety of the design rules in the lower slenderness range while greatly reducing the conservatism at higher slendernesses, resulting in a more efficient, more consistent and safer design.
Thirdly, for shear buckling, the current design rules were assessed using a parametric study based on geometrically and materially nonlinear numerical models, which were first validated against the performed shear buckling experiment and experiments found in the literature. A sensitivity analysis investigated the effect of all design parameters on the predicted shear buckling strengths. Based on the numerical results and the failure modes observed in the shear buckling experiments, an extra term was included in the current shear buckling design equation to take into account the stiffness of the non-rigid end post. Although more research is needed to further validate this proposal, more efficient predictions were achieved, which is very promising.
Lastly, the strength of stainless steel fillet welds was investigated. This is a complex subject that should rely on a large experimental basis, given the many variables influencing the weld behaviour.
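The role of an imperfection factor can be made concrete with a buckling-curve evaluation of the Eurocode (EN 1993-1-1 general case) type that such design rules build on: the reduction factor chi_LT drops from 1.0 as the relative slenderness grows, and the imperfection factor alpha sets how sharply. The alpha value below is a generic carbon steel value; the thesis proposal derives individual alpha values per stainless steel family, which this sketch does not reproduce.

```python
import math

def chi_lt(slenderness, alpha=0.34, lambda_0=0.2):
    """Buckling reduction factor of the Eurocode form:
    phi    = 0.5 * (1 + alpha * (lambda - lambda_0) + lambda^2)
    chi_LT = 1 / (phi + sqrt(phi^2 - lambda^2)), capped at 1.0."""
    lam = slenderness
    phi = 0.5 * (1.0 + alpha * (lam - lambda_0) + lam**2)
    return min(1.0, 1.0 / (phi + math.sqrt(phi**2 - lam**2)))

# Slenderness range covered by the parametric study described above
for lam in (0.3, 0.8, 1.5, 1.95):
    print(lam, round(chi_lt(lam), 3))
```

Raising alpha lowers the whole curve (safer but more conservative), which is exactly the trade-off the reliability analysis balances per stainless steel family.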
By analysing the experiments carried out in the framework of this work and comparing them to results found in the literature, large scatter between the different testing programmes was observed. We concluded that a uniform measurement method for the fracture area is crucial to obtain consistent predictions. Nevertheless, improvements to the correlation factor βw could still be proposed based on a reliability analysis of all experiments, again allowing a more efficient design of welds.
Nowadays, reliability-based validation of the post-fire bearing capacity of structures focuses on multi-storey buildings used for dwellings or offices, due to the high human impact related to the use of such buildings. Research using first-order reliability methods has assessed the behaviour of simple isostatic concrete slabs, which are commonly used in this type of building. However, knowledge of the post-fire behaviour of structures is also of high economic importance for industry and companies, and a lack of knowledge on the behaviour of single-storey framed hyperstatic steel structures still exists. The main goal of this research is to simulate the consequences of natural fire on industrial steel or composite structures, to compare the simulations against results available in the literature and to provide a method to evaluate the remaining structural load-bearing capacity. With only a limited amount of data, numerical techniques can determine the temperature distribution in enclosures and, from that, predict the structural response. Furthermore, it is known that the failure mode of this kind of framed structure in case of fire is always determined by the moment-resisting beam-column nodes. A force-based method directly delivers the information needed for such a framework and can easily be arranged in a statistical format. With the results of the theoretical analysis, the residual bearing capacity of the considered structure can be assessed after exposure to fire.
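The reliability vocabulary used above can be illustrated with the simplest second-moment formulation (not the thesis method, which couples natural fire simulation with a force-based structural analysis): with normally distributed resistance R and load effect E, the reliability index and failure probability follow in closed form. A reduced post-fire resistance would enter through a lowered mean of R. All numbers below are invented for illustration.

```python
import math

def reliability_index(mu_R, sigma_R, mu_E, sigma_E):
    """Second-moment reliability index for limit state g = R - E:
    beta = (mu_R - mu_E) / sqrt(sigma_R^2 + sigma_E^2)."""
    return (mu_R - mu_E) / math.hypot(sigma_R, sigma_E)

def failure_probability(beta):
    """P_f = Phi(-beta), the standard normal CDF via the error function."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

# Hypothetical post-fire moment resistance vs. load effect (kNm)
beta = reliability_index(mu_R=900.0, sigma_R=90.0, mu_E=500.0, sigma_E=75.0)
print(round(beta, 2), failure_probability(beta))
```

First-order reliability methods (FORM), as referenced above, generalise this index to nonlinear limit states and non-normal variables, but the interpretation of beta is the same.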
The growing world population puts a tremendous strain on the use of natural resources, forcing the construction sector to find alternative building materials. The use of recycled aggregates, originating from construction and demolition waste, has therefore attracted growing interest in recent decades. Nevertheless, there is still a lack of confidence in the use of recycled aggregates in structural concrete. In addition, the use of fibres in concrete has been shown to significantly improve the post-cracking behaviour compared to conventional concrete. This led to the constitutive tensile model for fibre reinforced concrete given in the fib Model Code 2010, a model mainly based on research into steel fibre reinforced concrete with natural aggregates. From this perspective, this PhD work expanded the research domain to the application of recycled aggregates and other fibre types. The focus is on the verification and optimisation of the constitutive tensile model for fibre types other than steel fibres, as well as on the influence of recycled aggregates on the post-cracking behaviour of fibre reinforced concrete.
As-design and as-built Building Information Models have gained popularity in the Architecture, Engineering and Construction industry. The number of countries requiring such models during the construction workflow and at project completion has increased significantly in the past years. Apart from documentation purposes, it is hoped that this demand will also propel innovation in the construction industry, commonly regarded as rather conservative. Even today, in a highly digitised and automated world, the construction workflow, in particular the monitoring of executed works by site managers, mainly happens manually. In the eternal drive to optimise profits, this is a strange anomaly. In this project, a series of (semi-)automated workflows is developed to augment current construction site monitoring practices. The performed research follows a logical start-to-end pipeline, analogous to the manual monitoring analyses in place, but in a more digitised and data-driven fashion. The following contributions are part of this research.
Data acquisition
Construction site images form the main input of all developed frameworks. However, only recently has the information they contain become fully exploitable. Contributing factors include the advent of digital cameras that accurately depict the building scene, the availability of sufficient processing power and increasingly performant photogrammetric pipelines. Nevertheless, construction environments remain very challenging to capture and pose many obstacles. This research covers the devices, methodology and challenges involved in accurately recording image data.
Current data acquisition workflows are examined and extended towards construction industry purposes to facilitate accurate, yet accessible, data recording sessions.
Data processing
Only if the captured imagery can be positioned correctly (relative to the other images, but also to other datasets such as the as-design BIM model) can the information the pictures contain be successfully used in data-driven monitoring approaches. Relying on traditional photogrammetric pipelines and advancing the state of the art, a framework is developed that processes and (geo-)references multi-temporal image data without the need for tedious and error-prone ground control point indication. One of the major conclusions is that the presented approach not only saves valuable time but also increases the accuracy of the construction site datasets.
Data analyses
Once the imagery has been photogrammetrically processed, it can be used to compare reality to the design. The detected discrepancies between both worlds allow updating the as-design model to an as-built BIM model. Two separate frameworks are developed for these analyses. The first relies on the 3D geometry of the point cloud data to determine all element deviations. By considering not only the optimal shift of the element itself but also the dominant transformation of the cluster of its nearest neighbours, deviations can be determined virtually unaffected by georeferencing and drift errors. While we prove this to be true for the performed experiments with synthetic data, optimisations are still required to transfer the method's potential to realistic environments. The second is a purely image-based approach that evaluates element positions by comparing reality images with duplicate virtual images of the BIM environment, called BIMages. The latter are created via the known characteristics of all images retrieved in the preceding photogrammetric process.
The fact that a reality image and its BIMage do not fully agree in the case of element deviations is exploited. Displacements are intentionally applied to the BIM element, and image pair similarity evaluations assess which displacement is optimal, i.e. what the deviation of that element is. Excellent results are achieved in the experiments, showcasing the method's large potential. However, tests in more complex building environments are still required.
Data visualisation
While experts are able to detect various anomaly patterns, this is frequently not the case for site managers, who are less familiar with three-dimensional remote sensing data and deviation analyses. Therefore, the final step consists of insightfully presenting the obtained results via clear and comprehensible visualisations or via an even more immersive Virtual Reality environment. The developed proof-of-concept applications allow for timely presentation of recorded data and analysis results, thus serving an industry where developments follow each other rapidly and timely error detection is crucial to lower the omnipresent failure costs.
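The displacement-search principle behind the BIMage comparison can be shown with a toy example: a "reality" image is a shifted copy of the rendered view, candidate displacements are applied, and the shift that maximises image similarity is taken as the element deviation. The thesis does not specify its similarity metric here, so a plain sum of squared differences is used as an illustrative stand-in, on synthetic data.

```python
import numpy as np

def shift(img, dx):
    """Apply a horizontal pixel displacement (wrap-around, for simplicity)."""
    return np.roll(img, dx, axis=1)

def estimate_deviation(reality, bimage, max_shift=5):
    """Return the candidate shift of `bimage` that best matches `reality`,
    scored by sum of squared differences (lower = more similar)."""
    scores = {dx: np.sum((reality - shift(bimage, dx)) ** 2)
              for dx in range(-max_shift, max_shift + 1)}
    return min(scores, key=scores.get)

rng = np.random.default_rng(1)
bimage = rng.random((32, 32))              # stand-in rendered BIM view
reality = shift(bimage, 3)                 # element "deviated" by 3 pixels
print(estimate_deviation(reality, bimage)) # 3
```

A real implementation would search displacements of the 3D BIM element (re-rendering the BIMage each time) and use a similarity measure robust to lighting and texture differences, but the argmax-over-candidate-displacements structure is the same.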
Recycled aggregate (RA) is obtained by sorting, crushing and sieving inorganic material previously used in construction. When properly designed, substituting RA for natural aggregate (NA) in new construction is considered a sustainable practice that could help the European Union to achieve climate neutrality by 2050. Compared with NA, RA has a complex composition, high porosity and poor engineering properties. Appropriate use of RA according to its quality class and application grade is therefore highly critical. Several studies since 2000 have shown that it is feasible to produce high strength concrete (HSC) or high performance concrete (HPC) using RA derived from HSC waste. However, the sample size is limited, the repeatability (within-laboratory) and reproducibility (between-laboratory) are unknown, the role of RA in HSC/HPC is not yet fully understood, and design guidelines are lacking in the literature. To fill those gaps, since January 2017, the Materials and Constructions Division at KU Leuven Bruges Campus (RecyCon) has been working extensively on structural precast concrete incorporating coarse RA. This is the third PhD dissertation within that framework. The objective of this PhD research is to develop precast concrete products using commercial coarse RAs, while attempting to identify the determinants, explore the underlying mechanisms and provide design guidelines. The methodology includes literature review, experimental study and database analysis. The research results show that recycled concrete aggregate (RCA) conforming to NBN B 15-001 Type A+ is very promising for the development of HPC with compressive strength classes from C50/60 to C80/95. Mixed recycled aggregate (MRA) complying with NBN B 15-001 Type B+ is not recommended for mass usage in self-compacting concrete (SCC) with compressive strength classes of C50/60 and C55/67.
When developing recycled aggregate concrete (RAC), four key factors should always be kept in mind, namely RA quality, concrete compressive strength class, RA quantity and water compensation degree. In particular, the micro-Deval coefficient (MDE) and the Los Angeles coefficient (LA) are found to be effective quality indicators for coarse RA, either as alternatives to or alongside the oven-dried particle density (ρrd) and the 24 h water absorption (WA24). Furthermore, the volume fraction of coarse RA (i.e. its absolute volume per unit volume of concrete, m3/m3) is considered a more accurate quantity indicator than the replacement percentage of coarse NA (whether by volume or by weight), and therefore a new empirical model is proposed to predict the 28 d compressive strength of concrete made with coarse RCA. Moreover, the water compensation degree for coarse MRA is recommended to be between 80% and 100% of its 24 h water absorption, so as not to noticeably alter the carbonation resistance of the concrete, and therefore a new concept of mortar cover for coarse MRA is proposed. Finally, the incorporation of coarse RA affects the confinement effect of concrete. Studies using only cubic specimens may overestimate the cylinder compressive strength of RAC. It is anticipated that this PhD dissertation will support policy makers in refining the RA classifications specified in NBN B 15-001 and relaxing the restrictions on the use of high-quality RCA in structural concrete. A proposed amendment is given at the end of this dissertation. In addition, researchers are encouraged to pay more attention than before to the mechanical properties (MDE and LA) and water compensation degree of coarse RA for use in HPC, and to start using both cubic and cylindrical specimens for the compressive strength test of RAC.
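The two quantity/water indicators discussed above can be illustrated numerically. The sketch below uses the stated definitions (volume fraction as absolute RA volume per m3 of concrete; compensation water as 80-100% of WA24); the example mix proportions and function names are purely hypothetical, and the dissertation's actual strength model is not reproduced here.

```python
# Illustrative sketch (assumed formulas, hypothetical mix): the volume
# fraction of coarse RA and the absorption-compensation water discussed
# above. Not the dissertation's empirical strength model.

def ra_volume_fraction(ra_mass_kg_per_m3, particle_density_kg_per_m3):
    """Absolute volume of coarse RA per unit volume of concrete (m3/m3)."""
    return ra_mass_kg_per_m3 / particle_density_kg_per_m3

def compensation_water(ra_mass_kg_per_m3, wa24, degree=0.9):
    """Extra mixing water (kg/m3) offsetting RA absorption.

    degree is the water compensation degree; the recommended range for
    coarse MRA in the text is 0.8 to 1.0 of WA24.
    """
    assert 0.8 <= degree <= 1.0, "recommended range: 80-100% of WA24"
    return ra_mass_kg_per_m3 * wa24 * degree

# Hypothetical mix: 650 kg/m3 of coarse RCA, oven-dried particle density
# 2350 kg/m3, WA24 = 5%, compensation degree 90%.
vf = ra_volume_fraction(650, 2350)      # volume fraction in m3/m3
w = compensation_water(650, 0.05, 0.9)  # extra water in kg/m3
```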
Concrete with ordinary Portland cement (OPC) as a primary binder has become the most produced building material worldwide over the last decades, with yearly production levels reaching approximately 30 billion tons. However, the cement industry is annually responsible for 5-8% of the global anthropogenic CO2 emissions, and this figure is predicted to increase by 12-23% by 2050 owing to the rapid development of infrastructure construction. In this context, there has been an increasing need to explore alternative cementitious materials, especially those from the waste stream. This thesis focuses on the feasibility of using cutter soil mixing residue (CSMR), a solid waste consisting of a soil-cement mixture, to develop more sustainable cementitious materials. Firstly, CSMR was thermally activated at 800 °C. The chemical and mineral compositions of the raw and calcined CSMR were characterized. Then, the calcined CSMR was used as 1) a supplementary cementitious material (SCM) to develop blended cement and 2) a precursor to synthesize alkali-activated cement (AAC). The performance of these new cement binders was studied in terms of fresh properties, hydration kinetics and products, and mechanical properties. Finally, these new binders were used to synthesize strain-hardening cementitious composites (SHCCs), which are a special type of high-performance concrete with ultra-high ductility. The binder properties, mechanical properties, and micromechanics parameters of the SHCCs were investigated. Moreover, the embodied energy and carbon emissions of the cement-based materials prepared with the calcined CSMR were analyzed. The calcination allows the formation of Ca-rich amorphous aluminosilicates with pozzolanic activity and C2S with hydraulic properties. The former is derived from the synergy between clay minerals and Ca-bearing phases (e.g., calcite and hydrated cement) during the calcination, while the latter is generated from the dehydrated cement.
The formation of these phases with pozzolanic and/or hydraulic properties suggests the potential of the calcined CSMR as an SCM in blended cement or a precursor in AAC. The specific surface area (SSA) of the calcined CSMR plays a dominant role in the fresh properties of the blended cement. A calcined CSMR sample with a higher (or lower) SSA than OPC tends to increase (or decrease) the cement paste's yield stress and plastic viscosity. The calcined CSMR can contribute to the strength development of the blended cement through an increased hydration degree, an accelerated hydration rate, the pozzolanic reaction, and the creation of additional hydrates. An almost linear correlation exists between the reactive phase content in the calcined CSMR (from XRD characterization) and the compressive strength of blended cement pastes. The incorporation of up to 20% calcined CSMR has no detrimental effect on the compressive strength of the cement mortar. The addition of the calcined CSMR is conducive to limiting the drying shrinkage and improving the durability (e.g., sulfate and chloride resistance) of the cement mortar. Using the calcined CSMR in SHCCs as a partial cement replacement increases the tensile strain capacity by modifying the fiber-matrix interfacial properties, while maintaining the compressive strength. The new SHCCs have reduced embodied energy and CO2 emissions by 13-39% and 17-50%, respectively. The alkali-activated calcined CSMR (AA-CSMR) cement shows rapid early strength development and can achieve a maximum 28-day compressive strength of 33.2 MPa, which can be increased to 50 MPa or even higher by either incorporating 10% slag or curing at 70 °C for 1 day. The primary alkaline reaction products are C-(A)-S-H gels, which supports the formation of the reactive calcium-rich amorphous phases in the calcined CSMR.
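The "almost linear correlation" noted above between reactive phase content (from XRD) and compressive strength can be captured by an ordinary least-squares line. The sketch below is purely illustrative: the data points are made up, and the fitting routine is a generic least-squares implementation, not the thesis's analysis.

```python
# Hedged sketch: fitting a line to (reactive phase content, compressive
# strength) pairs, as an ordinary least-squares regression. The data
# points below are hypothetical, introduced only for illustration.

def linear_fit(xs, ys):
    """Ordinary least squares: return (slope, intercept) of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

reactive_content = [10, 20, 30, 40]   # wt% reactive phases, hypothetical
strength = [42.0, 46.0, 50.0, 54.0]   # 28-day strength in MPa, hypothetical
slope, intercept = linear_fit(reactive_content, strength)
# A positive slope mirrors the reported trend: more reactive phase
# content in the calcined CSMR, higher blended-cement paste strength.
```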
Compared with the OPC-based SHCCs, the SHCCs prepared with AA-CSMR cement exhibit comparable or even better mechanical properties, while reducing embodied energy by around 40% and CO2 emissions by about 60%. The results of this research support the use of calcined CSMR as a greener cementitious material. Considering the high compositional complexity of CSMR, systematic follow-up research on more CSMR samples from various sources is needed.